How Independent Measurement Can Rescue The Ailing Advertising Industry

Original Publisher
Forbes

By Madan Bharadwaj, CTO and Co-Founder at Measured.

Built upon an increasingly complex digital landscape with a history of fierce competition, the advertising ecosystem has long been plagued by a lack of transparency and wavering trust between the brands, platforms and vendors operating within the $600-plus billion global market. Events of the past few years have prompted a massive upheaval and sweeping changes across the industry, leading some close to the chaos to fear paid media’s fragile framework is on the brink of collapse.

Marketers must understand if and how their advertising is working to confidently determine where to spend money next. If measurement data can’t be trusted, the economic structure of the advertising industry collapses. It may be temporarily uncomfortable for those accustomed to the standard operating procedure, but the current shake-up is forcing a long-overdue correction to decades of unacceptable measurement practices — and it’s what we need to right the ship.

For years, marketers were led to believe that flawed measurement options were an unfortunate necessity: they had to trust agencies and platforms reporting on their own performance and accept results calculated in secret by multi-touch attribution (MTA) vendors. Today, disruptions fueled by data-privacy concerns, big tech competitive posturing and a dash of criminal intent have brought us to a boiling point — ineffective measurement systems are no longer a pill marketers can hold their noses and swallow.

Apple’s privacy measures continue to wreak havoc on just about everybody. Facebook, which historically tended to over-report conversions due to the shortcomings of last-touch metrics, saw a 40-50% drop in platform-reported conversions after Apple introduced App Tracking Transparency (ATT). Snap shares dropped 25% in Q3 on missed revenue expectations, also attributed to Apple. They are not alone. Every online ad channel is grappling with the fallout from restrictions on user-level tracking and third-party data access.

Adding to the measurement issues caused by data limitations, digital ad fraud is also growing at an alarming rate. One report estimates that 88% of digital ad clicks are fake — cause for concern when marketers are literally paying per click. And, if that isn’t enough to make advertisers scrutinize the performance data they rely on, the world’s two largest ad platforms now face allegations of rigging the programmatic bidding process, among other things, to derive an advantage over competitors.

Feeling powerless, some advertisers will attempt to restore control by diverting ad spend from “struggling” platforms to channels perceived to be less risky. Facebook, Google, and Snap currently occupy the hot seat, but the same issues are impeding every online platform’s ability to target, track and measure advertising. Moving budget from one channel to another, without reliable insight into performance, will only create a false sense of security.

The only way to stabilize the industry is for brands to implement reliable measurement and reporting that is independent of platform bias and blind spots. It never should have been acceptable for platforms to grade their own homework. Even if the data is accurate, discrepancies between platforms and results bloated by fraud cannot be identified and reconciled with platform reporting alone. Fraud can manufacture clicks, but it cannot buy products. For valuable insight that advertisers can act on, campaign performance must be reconciled with transaction data owned by the brand.

As access to third-party data disintegrates, cohort-based analytics and experiments using first-party data are the future-proof measurement options for marketers. Even without attributing user-level conversions (cookie-style), incrementality experiments can accurately determine how tweaks to a campaign or tactic impact business outcomes. Test and control experiments using geo-matched markets or split audiences built using data from a brand’s CRM file are the only ways to run statistically sound incrementality tests that are independent of platform reporting bias.

Geo experimentation can be applied to measure incrementality in almost any media environment and is especially useful for prospecting on platforms struggling to accurately attribute conversions. Geo testing, also known as matched-market testing, follows the scientific framework of controlled experimentation, but test and control groups are defined by geographic regions, or geos, that are selected to have similar demographics. Geo experiments can calculate the incremental contribution of media to any metric that can be observed at the geo level — no user data required.
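
To make the matched-market arithmetic concrete, here is a minimal sketch of a geo-level lift readout. The geo populations and conversion counts are invented for illustration; a real design would also handle market matching, pre-period calibration and statistical significance.

```python
# Hypothetical matched-market (geo) incrementality readout.
# All numbers below are illustrative, not real campaign data.

def geo_lift(test_geos, control_geos):
    """Estimate incremental conversions per capita in test vs. control geos.

    Each geo is a dict of observed conversions and population, so the
    comparison is rate-based and requires no user-level data.
    """
    test_rate = (sum(g["conversions"] for g in test_geos)
                 / sum(g["population"] for g in test_geos))
    control_rate = (sum(g["conversions"] for g in control_geos)
                    / sum(g["population"] for g in control_geos))
    # Incremental rate: conversions per capita attributable to the media.
    return test_rate - control_rate

# Test geos received the campaign; matched control geos did not.
test = [{"conversions": 1200, "population": 500_000},
        {"conversions": 900, "population": 400_000}]
control = [{"conversions": 800, "population": 500_000},
           {"conversions": 650, "population": 400_000}]

lift = geo_lift(test, control)
# Scale the per-capita lift back up to the test population.
incremental_conversions = lift * sum(g["population"] for g in test)
```

Because everything is computed from aggregate counts per region, this kind of readout survives the loss of user-level tracking entirely.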

By splitting in-house CRM files into customer segments, advertisers can test media combinations to identify the optimal contact strategy for each group. This approach, based on data that the brand owns and trusts, can be used to compare multichannel tactics, reduce unnecessary channel overlap and ultimately increase customer revenue contribution. Many brands don’t even realize the insight and growth potential that already exists within their CRM database.
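
One common way to build stable split audiences from a CRM file is deterministic hashing: each customer always lands in the same cell, so different contact strategies can be compared per segment across the life of the test. The customer IDs, segment names and salt below are hypothetical.

```python
# Sketch: deterministically split a CRM file into stable test/control
# cells within each segment. IDs, segments and salt are made up.
import hashlib

def assign_cell(customer_id, n_cells=2, salt="spring-contact-test"):
    """Hash the salted customer ID so assignment is stable across runs."""
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    return int(digest, 16) % n_cells

# A toy CRM extract: two segments, 1,000 customers.
crm = [{"id": f"cust-{i}", "segment": "lapsed" if i % 3 else "vip"}
       for i in range(1000)]

cells = {}
for customer in crm:
    key = (customer["segment"], assign_cell(customer["id"]))
    cells.setdefault(key, []).append(customer["id"])
# Each (segment, cell) group can now receive a different media mix,
# and results can be compared within segments.
```

Changing the salt produces a fresh, independent split for the next experiment without touching the underlying CRM data.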

By mapping geo and CRM test results to source-of-truth transaction data from e-commerce platforms like Shopify or BigCommerce, advertisers can get a clean read on the true contribution of media at the channel, campaign or ad set level. Trusted insight into the incrementality of media prevents critical budget decisions from being influenced by compromised or inaccurate performance data.

For advertising to survive the stream of changes that are systematically dismantling how the industry has operated for 20+ years, we need to build a framework of transparency and trust amongst all parties involved. That requires independent and transparent measurement of the value buyers are getting from sellers.

Marketers evaluating potential partners for neutral incrementality measurement should ask these key questions:

• Are experiments designed using proven test and control experimentation?

• Can they run independent experiments or do they rely on platform attribution reporting?

• Can they connect to your CRM system for first-party data use?

• Can they reconcile experiment data with transaction data from your e-commerce platform?

As one change is quickly followed by another, the ongoing turbulence has many marketers feeling uneasy and stuck in reactive mode. With an independent and transparent system of measurement, delivered by a neutral party, marketers can protect themselves from the whims of an unpredictable environment. When transparency and trust in advertising measurement are established, confident decision-making will follow and stabilize an industry grown weary of balancing on the edge of collapse.

 


Why Google Analytics And Facebook Attribution Reports Will Never Line Up — And What You Can Do About It

Original Publisher
Forbes

The impending demise of user-level tracking across platforms and devices will make complex advertising measurement models like multi-touch attribution (MTA) not just difficult but impossible. As these increasing limitations push advertisers back to privacy-compliant first-party data for attribution, conflicting results reported by different platforms and analytics tools will be a challenge for marketers hoping to make informed decisions about media investments.

Advertisers are now challenged with unifying the data they can collect from all the disparate sources available and extracting actionable insights, but who should they trust when Google Analytics (GA) and publisher reporting inevitably report wildly different results? The propensity for each platform to give itself more credit than it deserves is what moved us all away from last-click metrics in the first place — and into a decade of chasing the elusive holy grail of measurement promised by MTA.

For example, I worked with an e-commerce business last year that invested $1.6 million in Facebook advertising over the course of a month. Facebook claimed its ads were responsible for 50% of the brand’s total conversions for that month. GA, on the other hand, reported that Facebook drove less than 1% of total conversions. That is 70,000 conversions attributed to Facebook by its own reporting versus 140 by GA. The reports will always be different, and this drastic discrepancy is not uncommon.

Which Report Is Correct?

The answer is somewhere in between. We’ll give the platforms the benefit of the doubt and say that neither is intentionally providing inaccurate results or deceptively inflating performance numbers. What they report is accurate, based on what they measure and how they measure it. Each report provides some level of insight marketers can use — but, on their own, neither can tell you exactly which media contributed to sales and by how much.

Channel-level reporting (e.g., Source-Medium reporting in GA or Channel Performance reporting in Adobe Analytics) offered by web analytics platforms is based on site-side tracking. Next to your own CRM system, it is the most accurate way to measure total conversions. GA is extremely effective for web analytics tasks like understanding site performance and measuring total business impact, but site-side analytics are not ideal for understanding media performance at the channel level.

The breakdown in GA’s ability to accurately assign conversion credit comes down to referrer URLs, the link a user clicked on that sent them to your page before they made a purchase. GA uses these links to categorize conversions. But, if GA is unable to trace the link back to Facebook for some reason, it will tag it as organic search or something else that conveniently gives the credit to Google by default.

These miscategorizations wouldn’t be a huge deal if they were rare, but this critical part of GA measurement fails often, for a multitude of reasons. UTM codes, unique snippets of text added to the end of a URL to indicate the source, are complicated to write and track, and they break all the time. Even when the UTM is pristine, other factors can impact GA’s reporting. If a visitor lands on your page and immediately gets redirected or clicks away before the GA tag loads, you lose the credit. If someone comes to your page from Facebook, leaves and then comes back through search to make a purchase, Google gets the credit. The more channels you add to the mix, the less effective GA is at reporting at the channel level.
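
To make the fragility concrete, here is a minimal, hypothetical sketch (not GA’s actual logic) of classifying a landing URL by its UTM parameters. The key point: when the tags are missing or broken, credit silently falls through to a default bucket.

```python
# Illustrative source classification from UTM parameters on a landing URL.
# This is a simplified stand-in for how analytics tools bucket traffic.
from urllib.parse import parse_qs, urlparse

def classify_source(landing_url):
    """Return a 'source / medium' label from UTM tags, or a default bucket."""
    params = parse_qs(urlparse(landing_url).query)
    source = params.get("utm_source", [None])[0]
    medium = params.get("utm_medium", [None])[0]
    if source and medium:
        return f"{source} / {medium}"
    # Missing or broken UTM tags: the visit falls into a default bucket,
    # which is how a paid-social click ends up credited to something else.
    return "(direct) / (none)"

assert classify_source(
    "https://shop.example/?utm_source=facebook&utm_medium=paid"
) == "facebook / paid"
assert classify_source("https://shop.example/") == "(direct) / (none)"
```

A single typo in `utm_source`, a redirect that strips the query string, or a return visit through search all route the conversion into the wrong bucket.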

On the other side of the first-party coin, Facebook Ads Manager and other attribution tools provided by publishers rely on completely different datasets for measurement. Facebook can tell you how many people saw your ad on its platform and how many of them converted. This data is useful for determining whether your targeting is working or your creative is landing. What it cannot tell you is how many of the people who bought something would have made the purchase anyway, even if they had not seen your ad — so Facebook claims credit for all of them.

How To Manage Conflicting Reports

To reconcile the reporting discrepancies and determine the true contribution of different media channels and tactics to conversions, advertisers can compare the results of incrementality testing run on a platform like Facebook with total conversion data from GA.

Incrementality measurement uses experiments to measure how many people on Facebook would have converted regardless of whether they encountered your ad. By withholding the ad from a statistically significant portion of your target audience and calculating what percentage of them still made a purchase, you can determine what percentage of total conversions to credit to the ad. Hold that up against the overall business impact reported on GA, and you can tie your Facebook investments directly to revenue.
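
The holdout arithmetic described above reduces to a few lines. The conversion counts below are invented; a real test would also check that the observed lift is statistically significant before acting on it.

```python
# Illustrative holdout arithmetic with made-up numbers: what share of
# conversions in the exposed group was actually incremental?

def incremental_share(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Fraction of exposed-group conversions attributable to the ad."""
    exposed_rate = exposed_conv / exposed_n
    # The holdout rate is the baseline: people who bought anyway.
    holdout_rate = holdout_conv / holdout_n
    return (exposed_rate - holdout_rate) / exposed_rate

# 2.0% of the exposed audience converted; 1.5% of the withheld audience
# converted too, without ever seeing the ad.
share = incremental_share(2000, 100_000, 1500, 100_000)
# Only the lift above baseline is credited to the ad; here that is a
# quarter of what the platform's own report would claim.
```

Multiplying that share by GA’s total conversion count for the channel is what ties the platform’s spend back to revenue the business would not otherwise have seen.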

While I used Facebook and Google for the example above, incrementality testing can be applied to just about any marketing channel or tactic. Because each platform operates on a slightly different framework and provides access to different types of datasets, experiments need to be customized and nuanced for each specific environment. It can be a grueling process, but once you have ongoing experimentation in place on your top platforms, incrementality results from different sources can provide cross-channel insights for investment decisions and more.

It can be disconcerting to receive conflicting performance reports from various sources, but I am here to tell you that your site-side analytics and platform metrics will never match up. What’s important to understand is what each platform actually measures and how you should and should not use the data. Layering on incrementality testing can then tease out additional actionable insights for making the most informed decisions about your media investments.

 


Guest Post: Why Marketing Attribution Has Failed in the Boardroom

Original Publisher
Forbes

By Trevor Testwuide, CEO and Co-Founder at Measured, helping brands grow by measuring incremental media contribution to desired performance results.

At our company, which measures incremental media contribution, many of our direct-to-consumer and retail-brand clients were connected to us by private equity or venture capital investors in search of an effective way for marketers to communicate performance to financial stakeholders.

It’s a common tale: In an effort to prove their worth, marketers scramble to produce performance reports using complex measurement and attribution models, while investors just want a straightforward and credible understanding of how marketing spend ties to company financials.

For the past decade, multitouch attribution promised to deliver a way for marketers to speak the language of finance. Many people, myself included, invested a great deal into building systems to calculate what percentage of a sale could be attributed to each marketing touchpoint, only to fail at earning the necessary trust in the boardroom.

In my experience, finance professionals tend to go through a somewhat predictable thought process when being presented with performance data, and marketers should focus on answering these questions when making their case.

1. Do I trust the information being presented?

Multitouch attribution never got past this first question. For myriad reasons, multitouch attribution has struggled to live up to its promises. It can be expensive and difficult to implement. And, perhaps most importantly, I don’t believe it fully earned the trust of finance or marketing. Multitouch attribution was built on top of an idealistic design for data collection requiring user-level event and identity tracking across all platforms. This dream resulted in a herculean effort of data collection, mapping and reconciliation that never landed. Further, scientists applying the complex statistical modeling did so with no transparency.

Why would any investor, let alone marketer, trust that? Frustrated voices of marketers and many heated boardroom discussions led me to the conclusion that multitouch attribution was the wrong approach in the first place. Rather than attempting the messy task of attributing a percentage of sales to a marketing touchpoint, I recommend figuring out how each marketing tactic incrementally contributes to revenue. In my experience running a company that specializes in incrementality measurement, I’ve found that is a language that speaks to finance.

2. How did each media tactic truly contribute to the business?

Recent privacy legislation and the crackdown on cookies and third-party data collection have finally pushed marketers to look for a better option than multitouch attribution. As a result, I’ve seen scientific methods like randomized test-and-control experimentation gaining traction as ways to understand media’s incremental contribution. For example, Facebook reported, “Industry leaders such as Netflix, Airbnb, eBay and Booking.com say they’ve seen success using incrementality measurement in today’s rapidly changing advertising landscape.”

Incrementality measurement uses statistically sound experiments to reveal the impact of each marketing channel or tactic on desired business outcomes. The results enable marketers to demonstrate in a very clear way how money invested in paid media resulted in an increase in profit.

3. How far can I push my best-performing media?

The natural follow-up questions to proven return on investment are whether you will get the same results by investing more money and when the law of diminishing returns will kick in. The beauty of incrementality testing is that different scenarios can be tested to determine at what point increasing spend would be wasting money.
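
The diminishing-returns readout from such a series of tests boils down to marginal return per tranche of spend. The spend levels and incremental revenue figures below are hypothetical.

```python
# Hypothetical scenario-planning readout: incremental revenue measured
# by experiments at several spend levels on one channel.

spend_levels = [50_000, 100_000, 150_000, 200_000]
incr_revenue = [150_000, 250_000, 310_000, 330_000]

# Marginal ROAS: return on each additional tranche of spend.
marginal = []
prev_spend, prev_revenue = 0, 0
for spend, revenue in zip(spend_levels, incr_revenue):
    marginal.append((revenue - prev_revenue) / (spend - prev_spend))
    prev_spend, prev_revenue = spend, revenue

# marginal == [3.0, 2.0, 1.2, 0.4]: the first $50k returns $3 per dollar,
# while the last $50k returns 40 cents — money better spent elsewhere.
```

The inflection point where marginal return drops below break-even (including margin, not just revenue) is the saturation signal the boardroom actually wants to see.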

I’ve seen many brands run a series of experiments to learn they are significantly oversaturating a particular channel or campaign. It makes sense to keep adding money to an effort that keeps paying out, but knowing when it is financially beneficial to pull back and spend it elsewhere is the key to growth.

4. Can we grow profitably through paid media?

This is the critical question and understandably the most difficult to answer. Yes, through incrementality testing and scenario planning marketers can identify proper budget allocation across multiple channels and tactics, optimized for incremental net profit. What makes this effort exponentially more complicated is that nothing is static. Platforms change, regulations change and markets change, and if 2020 has shown us anything, it is that the whole world can change due to a single event.

Data from a point in time one month ago cannot provide actionable insight for tomorrow if consumer behaviors are shifting day to day. Marketers need accurate, consistently updated data to be agile and make profitable decisions. Experiments need to be designed so that they can be continuously redeployed for current, ongoing insight, not just for the quarterly boardroom meeting.

Finding Success With Incrementality Measurement

For executives and investors who want to help marketers gain trust with stakeholders, it’s important to tell them exactly what you need and align on a clear learning agenda. Rather than resigning to the idea that marketing will always be somewhat of an enigma, push back on bloated reports full of metrics pointing in different directions and ask for a clear delineation of how marketing investments impact business outcomes.

For marketers who may have been burned before and are wary of trusting yet another method of media measurement, returning to a clear and transparent methodology, rooted in science, will be a welcome transition. The reports publishers deliver, based on data from the platforms they own, contain the most accurate performance tracking available. Start there, then build incrementality testing on top of first-party IDs from the platforms to uncover more valuable and trusted insights for reporting and optimizing results.

If finance and marketing can agree on a methodology and currency for a demonstration of trusted media performance, brands will have a clear path forward for growth through paid media.

 

Experiments need to be designed in a way they can be continuously redeployed for current, ongoing insight, not just for the quarterly boardroom meeting.

By Trevor Testwuide, CEO and Co-Founder at Measured, helping brands grow by measuring incremental media contribution to desired performance results.

Many of our direct-to-consumer and retail-brand clients were connected to us by private equity or venture capital investors searching for an effective way for marketers to communicate performance to financial stakeholders.

It’s a common tale: In an effort to prove their worth, marketers scramble to produce performance reports using complex measurement and attribution models, while investors just want a straightforward and credible understanding of how marketing spend ties to company financials.

For the past decade, multi-touch attribution promised to deliver a way for marketers to speak the language of finance. Many people, myself included, invested a great deal in building systems to calculate what percentage of a sale could be attributed to each marketing touchpoint, only to fail at earning the necessary trust in the boardroom.

In my experience, finance professionals tend to go through a somewhat predictable thought process when being presented with performance data, and marketers should focus on answering these questions when making their case.

1. Do I trust the information being presented?

Multi-touch attribution never got past this first question. For myriad reasons, it has struggled to live up to its promises. It can be expensive and difficult to implement. And, perhaps most importantly, I don’t believe it ever fully earned the trust of finance or marketing. Multi-touch attribution was built on an idealistic design for data collection that required user-level event and identity tracking across all platforms. That dream demanded a Herculean effort of data collection, mapping and reconciliation that never landed. Further, the scientists applying the complex statistical modeling did so with no transparency.

Why would any investor, let alone a marketer, trust that? The frustrated voices of marketers and many heated boardroom discussions led me to conclude that multi-touch attribution was the wrong approach in the first place. Rather than attempting the messy task of attributing a percentage of each sale to a marketing touchpoint, I recommend figuring out how each marketing tactic incrementally contributes to revenue. In my experience running a company that specializes in incrementality measurement, I’ve found that is a language finance understands.

2. How did each media tactic truly contribute to the business?

Recent privacy legislation and the crackdown on cookies and third-party data collection have finally pushed marketers to look for a better option than multi-touch attribution. As a result, I’ve seen scientific methods like randomized controlled testing and experimentation gain traction as ways to understand media’s incremental contribution. For example, Facebook reported, “Industry leaders such as Netflix, Airbnb, eBay and Booking.com say they’ve seen success using incrementality measurement in today’s rapidly changing advertising landscape.”

Incrementality measurement uses statistically sound experiments to reveal the impact of each marketing channel or tactic on desired business outcomes. The results enable marketers to demonstrate clearly how money invested in paid media translated into incremental profit.
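To make the arithmetic behind such an experiment concrete, here is a minimal sketch of a holdout-test readout. The numbers, function names and parameters are all illustrative assumptions, not real campaign data or any vendor's actual methodology:

```python
# Hypothetical holdout experiment: one audience sees the ads (treatment),
# a statistically comparable audience does not (control). All figures below
# are made-up examples.

def incremental_lift(treat_conversions, treat_size,
                     control_conversions, control_size):
    """Incremental conversion rate: treatment rate minus control rate."""
    treat_rate = treat_conversions / treat_size
    control_rate = control_conversions / control_size
    return treat_rate - control_rate

def incremental_roas(lift, audience_size, avg_order_value, spend):
    """Incremental revenue generated per dollar of ad spend."""
    incremental_conversions = lift * audience_size
    return incremental_conversions * avg_order_value / spend

# 1,200 conversions among 100k treated users vs. 900 among 100k held out.
lift = incremental_lift(1200, 100_000, 900, 100_000)
iroas = incremental_roas(lift, 100_000, 80.0, 20_000)
print(f"lift={lift:.4f}, iROAS={iroas:.2f}")
```

The control group's conversions represent sales that would have happened anyway; only the difference is credited to the media, which is precisely what makes the result legible to finance.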

3. How far can I push my best-performing media?

The natural follow-up questions to proven return on investment are whether you will get the same results by investing more money and when the law of diminishing returns will kick in. The beauty of incrementality testing is that different scenarios can be tested to determine at what point increasing spend would be wasting money.

I’ve seen many brands run a series of experiments only to learn they were significantly oversaturating a particular channel or campaign. It makes sense to keep adding money to an effort that keeps paying out, but knowing when it is financially beneficial to pull back and spend elsewhere is the key to growth.
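A simple way to picture saturation is a concave spend-response curve: each additional dollar buys a little less revenue than the last. The sketch below uses a logarithmic curve with made-up parameters (nothing here is fitted to real data) to show where the next dollar of spend stops paying for itself:

```python
import math

# Illustrative diminishing-returns curve: revenue = a * log(1 + spend / b).
# The parameters a and b are invented for the example.

def incremental_revenue(spend, a=50_000, b=10_000):
    """Total incremental revenue at a given spend level (concave curve)."""
    return a * math.log(1 + spend / b)

def marginal_return(spend, a=50_000, b=10_000):
    """Revenue gained from the next dollar of spend (derivative of the curve)."""
    return a / (b + spend)

# Walk the curve in $5k steps and flag where spend passes break-even.
for spend in range(0, 60_001, 5_000):
    mr = marginal_return(spend)
    flag = "  <- past break-even" if mr < 1.0 else ""
    print(f"spend=${spend:>6,}  marginal=${mr:.2f} per $1{flag}")
```

In practice the curve itself is what a series of incrementality tests at different spend levels estimates; the break-even point is where it becomes financially beneficial to pull budget back.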

4. Can we grow profitably through paid media?

This is the critical question and understandably the most difficult to answer. Yes: through incrementality testing and scenario planning, marketers can identify the proper budget allocation across multiple channels and tactics, optimized for incremental net profit. What makes this effort exponentially more complicated is that nothing is static. Platforms change, regulations change and markets change, and if 2020 has shown us anything, it is that the whole world can change due to a single event.
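One way to sketch the scenario-planning step is a greedy allocator: repeatedly fund the channel whose next dollar returns the most, and stop once no channel pays back more than a dollar. The channels, curves and parameters below are illustrative assumptions, not a real planning model:

```python
# Hypothetical budget allocation across channels, each with its own
# diminishing-returns curve revenue = a * log(1 + spend / b), whose
# marginal return is a / (b + spend). All parameters are invented.

CHANNELS = {            # channel: (scale a, saturation b)
    "search": (40_000, 8_000),
    "social": (30_000, 12_000),
    "display": (15_000, 10_000),
}

def marginal(spend, a, b):
    """Revenue from the next dollar on this channel at its current spend."""
    return a / (b + spend)

def allocate(total_budget, step=1_000):
    """Greedy allocation: fund the best marginal channel, one step at a time."""
    spend = {ch: 0 for ch in CHANNELS}
    for _ in range(total_budget // step):
        best = max(CHANNELS, key=lambda ch: marginal(spend[ch], *CHANNELS[ch]))
        # Stop once even the best channel returns less than the dollar put in.
        if marginal(spend[best], *CHANNELS[best]) < 1.0:
            break
        spend[best] += step
    return spend

plan = allocate(60_000)
print(plan)
```

Note that the allocator can leave budget unspent: if every channel is saturated past break-even, the profitable move is not to spend the remainder at all. Because the curves themselves drift as platforms and markets change, the inputs have to be refreshed by ongoing experiments.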

Data from a single point in time a month ago cannot provide actionable insight for tomorrow if consumer behaviors are shifting day to day. Marketers need accurate, consistently updated data to stay agile and make profitable decisions. Experiments need to be designed so they can be continuously redeployed for current, ongoing insight, not just for the quarterly board meeting.

Finding Success With Incrementality Measurement

For executives and investors who want to help marketers build trust with stakeholders, tell them exactly what you need and align on a clear learning agenda. Rather than resigning yourself to the idea that marketing will always be something of an enigma, push back on bloated reports full of metrics pointing in different directions and ask for a clear account of how marketing investments affect business outcomes.

For marketers who may have been burned before and are wary of trusting yet another method of media measurement, returning to a clear and transparent methodology, rooted in science, will be a welcome transition. The reports publishers deliver, based on data from the platforms they own, contain the most accurate performance tracking available. Start there, then build incrementality testing on top of first-party IDs from the platforms to uncover more valuable and trusted insights for reporting and optimizing results.

If finance and marketing can agree on a methodology and currency for a demonstration of trusted media performance, brands will have a clear path forward for growth through paid media.
