Measurement Roadmap

Your step-by-step path to marketing measurement maturity — from basic tracking to unified measurement

Marketing measurement is not a one-time setup — it is a journey. Each level builds on the previous one, unlocking deeper insights and more confident budget decisions.

Why a Roadmap?

Most companies jump straight to advanced tools without building the foundation first. The result: unreliable data, conflicting metrics, and wasted budget. This roadmap ensures you build measurement maturity in the right order.

The Five Levels

| Level | Name | What You Unlock | Minimum Requirement |
| --- | --- | --- | --- |
| 1 | Pixel Tracking | Event data, pageviews, conversions | Adverfly Pixel installed |
| 2 | Multi-Touch Attribution (MTA) | Cross-channel journey insights | 30+ days of pixel data |
| 3 | Marketing Mix Modeling (MMM) | Causal ROI, budget optimization | 6+ months of spend data |
| 4 | Incrementality Testing | Experimental proof of ad impact | Active campaigns in 3+ regions |
| 5 | Unified Measurement | Triangulated truth, maximum confidence | All previous levels active |

Apps Unlocked per Level

Each level does not just unlock a measurement method — it also activates or enhances specific Adverfly apps:

| Level | Apps Unlocked |
| --- | --- |
| 1 | Summary, Insights, Creatives, Journeys, Surveys, Health Center, Map, Hourly Breakdown, UTM Builder, Data Bridge, Integrations, Labels, Automations, News |
| 2 | Custom Attribution, LTVs, Customer Segments, Correlations, Data Exports, Custom Reports |
| 3 | Marketing Mix Modeling, AI Forecasting, Marketing Angles, Ad Fatigue, Logbook, Recommendations |
| 4 | Geo Testing |
| 5 | Unified Dashboard (coming soon), Loyalty Program (coming soon) |

Some apps work from Level 1 but become significantly more powerful at higher levels. For example, Insights shows basic metrics at Level 1 but displays unified ROI with confidence scores at Level 5.

How to Use This Guide

Work through the levels in order. Each page explains:

  • What it is — the concept in plain language
  • Why it matters — the business impact
  • How to set it up — step-by-step in Adverfly
  • Apps unlocked — which Adverfly tools become available at this stage
  • When to move on — signals that you are ready for the next level

Your current level depends on what data you already have. Most new customers start at Level 1 and reach Level 3 within their first quarter.

Why Marketing Measurement?

The problem every advertiser faces — and why one method is not enough

The Core Problem

You spend money on ads. Some of that money drives revenue. Some of it is wasted. The question is: how much of each?

This sounds simple, but it is one of the hardest problems in marketing. Here is why:

The Attribution Gap

A customer sees your Instagram ad on Monday, clicks a Google ad on Wednesday, and buys on Friday after typing your URL directly. Who gets the credit?

  • Google claims the sale (last click)
  • Meta claims the sale (view-through)
  • Your analytics says it was direct traffic
  • The customer says a friend recommended your brand

All four are partially right. None tells the full story. This is the attribution gap — the difference between what platforms report and what actually drives your revenue.

Why One Method Is Not Enough

| Method | Strength | Blind Spot |
| --- | --- | --- |
| Platform reporting | Real-time, granular | Each platform over-counts, no cross-platform view |
| Pixel tracking | First-party data, cross-platform | Cannot track offline, limited by cookies and privacy |
| Surveys | Captures awareness channels (TV, podcasts, word-of-mouth) | Subjective, recall bias, small sample size |
| Attribution (MTA) | Multi-touch journey view | Cookie-dependent, correlation not causation |
| Marketing Mix Modeling | Privacy-proof, includes offline | Channel-level only, needs 6+ months data |
| Incrementality testing | Proves causation | Expensive, slow, tests one channel at a time |

No single method covers everything. That is why you need multiple methods — and this roadmap shows you how to build them up, step by step.

The Business Impact

Companies that invest in measurement maturity see concrete results:

  • Stop wasting 20-30% of ad spend on over-saturated channels
  • Discover hidden winners — channels that MTA undervalues but MMM reveals as high-ROI
  • Defend budget decisions with experimental proof, not platform self-reporting
  • React faster — real-time MTA signals combined with causal MMM insights

How the Methods Work Together

Think of measurement methods as lenses on the same reality:

  • Pixel Tracking = the raw data (what happened)
  • Surveys = the customer's perspective (what they remember)
  • MTA = the microscope (detailed journey view, but narrow)
  • MMM = the telescope (big-picture causal view, but less granular)
  • Incrementality = the experiment (proof, but expensive and slow)
  • Unified Measurement = all lenses combined into one picture

Each lens reveals something the others miss. The roadmap guides you through activating them in the right order.

Measurement Methods Explained

Each measurement method in plain language — what it is, how it works, and when to use it

Pixel Tracking

In one sentence: A small code snippet on your website that records every user action.

How it works: When someone visits your site, the pixel fires and sends data to Adverfly — which page they viewed, what they clicked, whether they bought something, and where they came from (UTM parameters).
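As a minimal illustration of the UTM capture described above, here is a sketch of how a pixel might read UTM parameters from the landing-page URL before sending an event. This is not the Adverfly SDK; the function name and return shape are hypothetical.

```javascript
// Illustrative sketch (not the Adverfly SDK): extract UTM parameters
// from a landing-page URL, as a pixel might do before sending an event.
function extractUtmParams(url) {
  const params = new URL(url).searchParams;
  const utm = {};
  for (const key of ["utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"]) {
    const value = params.get(key);
    if (value !== null) utm[key] = value; // only include parameters that are present
  }
  return utm;
}

// Example (hypothetical URL):
const utm = extractUtmParams(
  "https://shop.example.com/?utm_source=meta&utm_medium=paid_social&utm_campaign=spring_sale"
);
// utm.utm_source === "meta", utm.utm_medium === "paid_social"
```

In a real pixel, these values would be attached to every event so that later attribution knows which channel brought the visitor.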

Analogy: Think of it as a security camera for your website. It records everything, but it does not interpret what it sees. Interpretation is the job of the other methods described below.

You need this for: Everything else. Without pixel data, no other method works.


Surveys (Self-Reported Attribution)

In one sentence: Ask your customers directly how they found you.

How it works: After a purchase, a short survey appears: "How did you hear about us?" The customer selects from options like Instagram, Google, a friend, a podcast, etc.

Analogy: Instead of analyzing footprints, you ask the person: "How did you get here?" Simple, but surprisingly powerful — especially for channels that leave no digital footprint.

Why it matters: Surveys capture what tracking cannot — word-of-mouth, podcast mentions, offline ads, brand awareness. A customer might say "I saw you on a podcast" even though their last click was Google. Without surveys, you would credit Google and never know about the podcast.

Limitations: People forget, misattribute, or pick the most recent thing. Survey data is directional, not precise. That is why it complements — but does not replace — quantitative methods.


Multi-Touch Attribution (MTA)

In one sentence: Distributes conversion credit across every touchpoint in the customer journey.

How it works: Instead of giving 100% credit to the last click, MTA looks at the full sequence of interactions — first ad view, email click, retargeting ad, final purchase — and distributes credit based on a model (equal, position-based, or data-driven).

Analogy: In football, last-click attribution credits only the goal scorer. MTA also credits the midfielder who made the pass and the defender who started the play.

Why it matters: Without MTA, upper-funnel channels (brand awareness, prospecting) look terrible because they rarely get the last click. MTA reveals their true contribution to the journey.

Limitations: MTA only sees digital touchpoints tracked by your pixel. It misses offline channels, struggles with cross-device tracking, and shows correlation — not proof that the ad caused the purchase.


Marketing Mix Modeling (MMM)

In one sentence: A statistical model that measures how each marketing channel drives revenue over time.

How it works: MMM takes your daily spend per channel and your daily revenue, then uses regression analysis to isolate how much revenue each channel causes — controlling for seasonality, trends, promotions, and external factors like weather.

Analogy: MTA watches individual customers through a microscope. MMM flies above in a helicopter and sees the big picture — "When we spent more on TV, revenue went up two weeks later. When we cut Meta, nothing changed."

Why it matters: MMM is privacy-proof (no cookies needed), captures offline channels, and estimates causal impact — not just correlation. It also reveals saturation points (when spending more stops helping) and carryover effects (how long an ad's impact lasts).

Limitations: MMM works at the channel level, not campaign or ad level. It needs 6+ months of historical data. And without experimental validation, it is still a model — an informed estimate, not proof.


Incrementality Testing

In one sentence: Turn off ads in some regions and measure what happens to prove causation.

How it works: You select treatment regions (ads keep running) and holdout regions (ads are paused). After 2-4 weeks, you compare revenue between the two groups. The difference is your incremental lift — the revenue that would not have happened without ads.

Analogy: It is a clinical trial for advertising. Treatment group gets the "medicine" (ads), control group gets a "placebo" (no ads). The difference proves whether the medicine works.

Why it matters: This is the gold standard of measurement. It is the only method that proves your ads caused conversions — not just that they were correlated with them. Executives trust experiments more than models.

Limitations: Experiments are expensive (you pause real ads), slow (2-4 weeks per test), and test only one channel at a time. You cannot experiment on everything — that is why you combine it with MMM and MTA.


Unified Measurement

In one sentence: Combine all methods into one triangulated answer with a confidence score.

How it works: Adverfly takes the ROI estimates from MTA, MMM, and incrementality tests and blends them using confidence-weighted averages. Methods with stronger causal evidence (experiments) get higher weight. The result is a single unified ROI per channel with a 0-1 confidence score.

Analogy: One witness says the suspect was at the scene. A second witness agrees. DNA evidence confirms it. Each piece of evidence strengthens the case. Unified Measurement does the same — the more methods agree, the higher the confidence.

Why it matters: No more arguing about which report is "right." Marketing, finance, and the C-suite see one number they can trust. Budget decisions become defensible, and the system gets smarter over time as you run more experiments.


Quick Comparison

| | Pixel | Surveys | MTA | MMM | Incrementality | Unified |
| --- | --- | --- | --- | --- | --- | --- |
| Data type | Events | Responses | Journeys | Aggregated | Experimental | Combined |
| Granularity | User-level | User-level | User-level | Channel-level | Region-level | Channel-level |
| Captures offline | No | Yes | No | Yes | Yes | Yes |
| Privacy-proof | No | Yes | No | Yes | Yes | Yes |
| Proves causation | No | No | No | Partially | Yes | Yes |
| Setup time | Hours | Days | Days | Weeks | Weeks | Ongoing |
| Data needed | Immediate | Immediate | 30+ days | 6+ months | Active campaigns | All methods |

Level 1: Pixel Tracking

The foundation — capture every user interaction on your website

What It Is

The Adverfly Pixel is a lightweight JavaScript snippet that records user interactions on your website — pageviews, button clicks, add-to-carts, and purchases. This raw event data is the foundation for everything that follows.

Without the pixel, there is nothing to attribute, model, or test.

Why It Matters

  • Visibility: See exactly how users interact with your site after clicking an ad
  • Conversion tracking: Know which campaigns drive actual revenue, not just clicks
  • Data foundation: Every advanced measurement method (MTA, MMM, Geo-Tests) relies on accurate event data

How to Set It Up in Adverfly

Step 1: Install the Pixel

Add the Adverfly pixel to your website. See the Pixel SDK documentation for detailed installation instructions.

<script>
  !function(){var a=window.adverfly=window.adverfly||[];
  /* ... pixel snippet ... */
  }();
  advPxl("init", YOUR_STORE_ID);
</script>

Step 2: Configure Conversion Events

Track purchases and other key events:

advPxl("conversion", "purchase", {
  transaction_id: "ORDER-123",
  transaction_gross_revenue: 4900, // in cents
  transaction_currency: "EUR",
});

Step 3: Verify Data Flow

  1. Open Health Center in Adverfly
  2. Check that events are appearing in real time
  3. Verify that conversion values match your checkout system

Apps Unlocked at Level 1

Once your pixel is live and data is flowing, these apps become available:

| App | What It Does |
| --- | --- |
| Summary | Overview dashboard with key performance metrics across all channels |
| Insights | Breakdown analysis with filters, tree data, and custom columns |
| Creatives | Analyze creative asset performance, AI-powered labeling, and previews |
| Journeys | Visualize customer paths across touchpoints and conversion funnels |
| Surveys | Create post-purchase surveys to capture qualitative feedback (e.g., "How did you hear about us?") |
| Health Center | Monitor pixel health, data connections, and integration status |
| Map | Geographic visualization of your marketing performance by region |
| Hourly Breakdown | Analyze performance patterns by hour of day |
| UTM Builder | Build and test tracking URLs with UTM and Adverfly parameters |
| Data Bridge | Sync first-party pixel data back to Meta, Google, TikTok, and Snapchat |
| Integrations | Connect ad platforms, analytics tools, and data sources |
| Labels | AI-powered creative labeling — automatically classify assets by format, angle, tone, and more |
| Automations | Overview of all system schedules, background tasks, and agent automations |
| News | Industry news and trends relevant to your market |

Surveys — A Special Note

Surveys deserve extra attention at Level 1. Post-purchase surveys ("How did you hear about us?") capture self-reported attribution — a qualitative signal that complements your quantitative pixel data. This is especially valuable for channels that are hard to track digitally (podcasts, word-of-mouth, influencers).

Set up surveys early — the data compounds over time and becomes invaluable when you reach Level 3 (MMM).

Checklist Before Moving On

  • Pixel fires on every page of your website
  • Purchase/conversion events include revenue and transaction ID
  • At least 7 days of clean data is flowing
  • No duplicate events or missing pages
  • UTM parameters are consistent across campaigns

When to Move to Level 2

You are ready for Multi-Touch Attribution when:

  • You have 30+ days of continuous pixel data
  • You are running campaigns on 2+ channels (e.g., Meta + Google)
  • You want to understand how channels work together, not just last-click

Level 2: Multi-Touch Attribution (MTA)

Understand the full customer journey across all touchpoints

What It Is

Multi-Touch Attribution assigns credit to every touchpoint a user interacts with before converting — not just the last click. Instead of giving 100% credit to the final ad, MTA distributes value across the entire journey.

Why It Matters

  • Beyond last-click: Discover that your Meta prospecting campaigns initiate journeys, even if Google converts them
  • Channel synergies: See which channels assist each other vs. cannibalize
  • Smarter budgets: Stop cutting "low-performing" channels that actually drive awareness

Attribution Models in Adverfly

| Model | How It Works | Best For |
| --- | --- | --- |
| Last Click | 100% credit to final touchpoint | Baseline comparison |
| First Click | 100% credit to first touchpoint | Understanding awareness drivers |
| Linear | Equal credit across all touchpoints | Balanced view |
| U-Shaped | 40% first, 40% last, 20% middle | Valuing both discovery and conversion |
| Total Impact | Combines click and impression signals to measure incremental impact | Most comprehensive view |
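To build intuition for how these models differ, here is a small sketch of the Linear and U-Shaped splits over a journey. This is illustrative only, not Adverfly's internal implementation; edge-case handling (one or two touchpoints) is an assumption.

```javascript
// Illustrative credit-splitting, not Adverfly's internal implementation.
function linearCredit(touchpoints) {
  // Equal share for every touchpoint in the journey
  const share = 1 / touchpoints.length;
  return touchpoints.map((tp) => ({ channel: tp, credit: share }));
}

function uShapedCredit(touchpoints) {
  const n = touchpoints.length;
  if (n === 1) return [{ channel: touchpoints[0], credit: 1 }];
  if (n === 2) return touchpoints.map((tp) => ({ channel: tp, credit: 0.5 })); // assumed edge case
  // 40% first, 40% last, remaining 20% split evenly across the middle
  const middleShare = 0.2 / (n - 2);
  return touchpoints.map((tp, i) => ({
    channel: tp,
    credit: i === 0 || i === n - 1 ? 0.4 : middleShare,
  }));
}

const journey = ["Meta Prospecting", "Email", "Google Search"];
// linearCredit: each touchpoint gets 1/3
// uShapedCredit: 0.4 / 0.2 / 0.4
```

Running both models on the same journeys side by side is exactly the comparison Step 3 below recommends.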

How to Set It Up in Adverfly

Step 1: Ensure Data Quality

Before enabling MTA, verify in Health Center:

  • Events are flowing from all channels
  • UTM parameters are correctly configured
  • No significant data gaps in the last 30 days

Step 2: Configure Attribution Settings

  1. Navigate to Attribution Settings in your workspace
  2. Select your preferred attribution model (start with Linear for a balanced view)
  3. Set your lookback window (default: 7 days)
  4. Define which conversion events to attribute

Step 3: Analyze Customer Journeys

  1. Open Journeys to see multi-touch paths
  2. Use Insights with attribution breakdown to compare channel performance
  3. Compare models side-by-side to understand each channel's role

Key Metrics Unlocked

  • Attributed Revenue — revenue credited to each channel based on the model
  • Assisted Conversions — conversions where a channel appeared but was not last-click
  • Path Length — average number of touchpoints before conversion
  • Time to Conversion — how long the typical purchase journey takes
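Path Length and Time to Conversion can be computed directly from journey data. The sketch below assumes a hypothetical data shape (an array of journeys, each an array of timestamped touchpoints), not the Adverfly event schema.

```javascript
// Illustrative metric computation; the journey data shape is hypothetical.
function journeyMetrics(journeys) {
  const pathLengths = journeys.map((j) => j.length);
  const durationsDays = journeys.map((j) => {
    // Time from first touchpoint to conversion, in days
    const first = j[0].ts;
    const last = j[j.length - 1].ts;
    return (last - first) / (1000 * 60 * 60 * 24);
  });
  const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return {
    avgPathLength: avg(pathLengths),
    avgTimeToConversionDays: avg(durationsDays),
  };
}

const day = 24 * 60 * 60 * 1000;
const journeys = [
  [{ channel: "meta", ts: 0 }, { channel: "google", ts: 2 * day }, { channel: "direct", ts: 4 * day }],
  [{ channel: "email", ts: 0 }, { channel: "google", ts: 2 * day }],
];
const metrics = journeyMetrics(journeys);
// avgPathLength: 2.5, avgTimeToConversionDays: 3
```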

Apps Unlocked at Level 2

With attribution data flowing, these additional apps become available or significantly enhanced:

| App | What It Does |
| --- | --- |
| Custom Attribution | Configure and compare attribution models (Last Click, First Click, Linear, U-Shaped, Total Impact) |
| LTVs | Analyze customer lifetime value segmented by acquisition channel and campaign |
| Customer Segments | Segment customers by behavior, value tier, and acquisition source |
| Correlations | Visualize how creative launches and scaling events correlate with performance shifts |
| Data Exports | Generate API keys to access attributed data programmatically via the REST API |
| Custom Reports | Build custom queries combining attribution data with other dimensions |

LTVs and Customer Segments

These two apps become particularly powerful at Level 2. With attribution data, you can answer questions like:

  • "Which channel brings the highest-LTV customers?"
  • "Do Meta-acquired customers have different retention than Google-acquired ones?"
  • "Which campaigns attract one-time buyers vs. repeat customers?"

This insight directly informs your budget allocation — even before MMM.

Limitations of MTA

MTA is powerful but has blind spots:

  • Cookie-dependent: Cannot track users across devices or after cookie deletion
  • Digital-only: Does not capture offline channels (TV, radio, OOH)
  • Correlation, not causation: Appearing in a path does not prove the ad caused the conversion
  • Signal loss: iOS privacy changes and ad blockers reduce tracking coverage

These limitations are why you need Level 3 (MMM) and Level 4 (Incrementality Testing).

Checklist Before Moving On

  • Attribution model configured and running for 30+ days
  • Reviewed journey paths and identified key channel interactions
  • Compared at least 2 attribution models to understand channel roles
  • Identified questions MTA cannot answer (offline impact, true causality)

When to Move to Level 3

You are ready for Marketing Mix Modeling when:

  • You have 6+ months of historical spend data across channels
  • You want to understand offline and upper-funnel channel impact
  • You need causal ROI estimates, not just correlation-based attribution
  • Your monthly ad spend exceeds $10,000

Level 3: Marketing Mix Modeling (MMM)

Statistical models that reveal the true causal impact of every marketing channel

What It Is

Marketing Mix Modeling is a statistical approach that analyzes the relationship between your marketing spend and business outcomes (revenue, conversions) over time. Unlike MTA, MMM does not rely on user-level tracking — it works with aggregated spend and outcome data.

Adverfly uses Bayesian MMM powered by PyMC Marketing, producing probability distributions instead of single-point estimates. This means every ROI number comes with a confidence range.

Why It Matters

  • Privacy-proof: Works without cookies, pixels, or user-level data
  • Full-funnel view: Captures offline channels (TV, radio, OOH) alongside digital
  • Causal estimates: Separates organic baseline from marketing-driven revenue
  • Budget optimization: Recommends optimal spend allocation across channels
  • Saturation curves: Shows the point of diminishing returns for each channel

How It Works

  1. Data input: Historical daily spend per channel + daily revenue/conversions
  2. Bayesian regression: The model learns how each channel's spend correlates with outcomes, controlling for trends, seasonality, and external factors
  3. Saturation modeling: Logistic curves capture diminishing returns — each additional dollar of spend yields less incremental revenue
  4. Adstock/carryover: Accounts for the delayed effect of advertising (e.g., a TV ad today still drives revenue next week)
  5. Output: Incremental revenue per channel, ROI with credible intervals, optimal budget allocation
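The two core transforms in steps 3 and 4 can be sketched in a few lines. These are simplified illustrations with hypothetical parameter values; Adverfly's actual model is Bayesian (via PyMC Marketing) and estimates these parameters from data.

```javascript
// Geometric adstock: each day's effective spend carries over a fraction
// `retention` of the previous day's effective spend (the carryover effect).
function adstock(spend, retention) {
  const out = [];
  let carry = 0;
  for (const s of spend) {
    carry = s + retention * carry;
    out.push(carry);
  }
  return out;
}

// Adstock half-life: days until a channel's carried-over effect halves.
function adstockHalfLife(retention) {
  return Math.log(0.5) / Math.log(retention);
}

// Simple saturating curve (diminishing returns): each additional unit of
// effective spend yields less incremental response.
function saturate(x, halfSaturation) {
  return x / (x + halfSaturation);
}

const effective = adstock([100, 0, 0, 0], 0.5); // [100, 50, 25, 12.5]
// adstockHalfLife(0.5) === 1 day
// saturate(200, 100) > saturate(100, 100), but by less than the first step up
```

Note the half-life relationship: with a retention of 0.5, an ad's effect halves every day; higher retention means longer-lasting campaigns.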

How to Set It Up in Adverfly

Step 1: Connect Your Data Sources

  1. Ensure all ad platforms are connected (Meta, Google, TikTok, etc.)
  2. Revenue data flows from your pixel or is imported via CSV
  3. Verify at least 6 months of continuous spend + revenue data

Step 2: Trigger a Model Run

  1. Navigate to Marketing Mix Modeling in your workspace
  2. Click Run Model — the model trains on your historical data
  3. Training takes approximately 15-30 minutes
  4. You will be notified when results are ready

Step 3: Interpret Results

Key views in the MMM dashboard:

  • KPI Overview — Total incremental revenue, blended ROI, model accuracy (R², MAPE)
  • Bayesian Time Series — Daily revenue decomposition with 95% and 80% credible intervals
  • Efficiency Chart — ROI vs. marginal ROI per channel (with optional MTA prior comparison)
  • Saturation Curves — Current spend position on each channel's diminishing-returns curve
  • Incremental Waterfall — Daily contribution of each channel to total revenue
  • Budget Optimizer — AI-recommended budget reallocation for maximum incremental revenue

Step 4: Optimize Budget

  1. Review the Budget Optimizer recommendations
  2. Compare current vs. recommended allocation
  3. Identify channels that are over-saturated (spend beyond diminishing returns)
  4. Identify channels with untapped potential (spend below optimal)

Key Metrics Unlocked

  • Incremental Revenue — revenue caused by marketing (not organic)
  • ROI with Credible Intervals — e.g., "ROI: 3.2x (80% CI: 2.8x – 3.6x)"
  • Marginal ROI (mROI) — return on the next dollar spent per channel
  • Saturation Point — the spend level where returns flatten
  • Adstock Half-Life — how long a channel's advertising effect persists

Apps Unlocked at Level 3

With MMM running, these apps become available or gain new capabilities:

| App | What It Does |
| --- | --- |
| Marketing Mix Modeling | Full MMM dashboard with Bayesian time series, saturation curves, and budget optimizer |
| AI Forecasting | 12-week revenue forecasts with 80% credible interval bands, powered by MMM model data |
| Marketing Angles | AI-generated marketing angles informed by channel performance and creative effectiveness data |
| Ad Fatigue | Monitor saturation and creative fatigue — know when a channel hits diminishing returns |
| Logbook | Audit trail of model runs, budget changes, and optimization decisions |
| Recommendations | AI-generated optimization suggestions based on MMM outputs — saturation alerts, budget reallocation, creative swaps |

The MMM Ecosystem

At Level 3, the platform shifts from descriptive analytics to prescriptive intelligence. The MMM model acts as a backbone that enriches other apps:

  • Insights now shows incremental revenue alongside attributed revenue
  • Creatives can rank assets by their contribution to MMM-measured channel performance
  • Surveys provide self-reported attribution data that can be compared against MMM-measured channel impact
  • Weather & News correlations are automatically controlled for in the model

Limitations of MMM

  • Granularity: Works at the channel level, not campaign or ad level
  • Historical bias: Model reflects past patterns — major strategy changes may not be captured
  • Data requirements: Needs 6+ months of data for reliable results
  • Validation: Without experiments, MMM is an estimate — not proof

This is why you need Level 4 (Incrementality Testing) to validate and calibrate MMM results.

Checklist Before Moving On

  • First model run completed with acceptable accuracy (R² > 0.85, MAPE < 15%)
  • Reviewed saturation curves and identified over/under-invested channels
  • Compared MMM ROI with MTA ROI — noted discrepancies
  • Identified 1-2 channels where you want experimental proof of impact

When to Move to Level 4

You are ready for Incrementality Testing when:

  • Your MMM is running and producing stable results
  • You have specific channels where you want experimental proof of ROI
  • You are spending in 3+ geographic regions (for geo-lift tests)
  • You want to calibrate your MMM with real experimental data

Level 4: Incrementality Testing

Run experiments to prove the true causal impact of your advertising

What It Is

Incrementality testing is the gold standard of marketing measurement. Instead of modeling or attributing, you run a controlled experiment: turn off ads in some regions (or for some users) and compare outcomes against regions where ads continue running.

The difference is your incremental lift — the revenue that would not have happened without advertising.

Why It Matters

  • Causal proof: The only method that proves ads caused conversions (not just correlated)
  • MMM calibration: Experimental results feed back into your MMM as Bayesian priors, making future models more accurate
  • Budget confidence: Know with certainty whether a channel is worth its spend
  • Executive trust: Experimental evidence is the most defensible measurement in the boardroom

Types of Incrementality Tests

Geo-Lift Tests

Compare geographic regions with and without advertising:

| Aspect | Details |
| --- | --- |
| How it works | Pause ads in selected "holdout" regions, keep running in "treatment" regions |
| Duration | Typically 2-4 weeks |
| Granularity | City, state, DMA, or postal code level |
| Privacy | No user-level data required |
| Best for | Proving channel-level impact |

Conversion Lift Tests (Platform-Managed)

Platform-run holdout tests (Meta, Google):

| Aspect | Details |
| --- | --- |
| How it works | Platform splits users into exposed vs. holdout groups |
| Duration | 1-4 weeks |
| Granularity | User level (managed by platform) |
| Privacy | Platform handles identity |
| Best for | Quick validation of a single platform's impact |

How to Set It Up in Adverfly

Step 1: Find Test Candidates

  1. Navigate to Geo-Testing in your workspace
  2. The Candidate Finder automatically analyzes your pixel data to identify regions suitable for testing
  3. Review suggested treatment-control region pairs (matched on historical performance)

Step 2: Design the Experiment

  1. Click AI Test Designer to generate a test plan
  2. Select the channel to test (e.g., "Meta Prospecting")
  3. Choose treatment and holdout regions from the suggested pairs
  4. Set test duration (recommended: 3-4 weeks)
  5. Review the minimum detectable effect — ensure your test has enough statistical power
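For intuition on what "enough statistical power" means, here is a standard frequentist approximation of the minimum detectable effect (MDE) for a two-group comparison. This is only an intuition-building sketch with hypothetical numbers; the AI Test Designer's Bayesian calculation is different.

```javascript
// Standard frequentist MDE approximation for a two-group comparison.
// Defaults: z = 1.96 for 95% confidence, z = 0.84 for 80% power.
// Not the AI Test Designer's method; shown only to build intuition.
function minimumDetectableEffect(stdDev, nPerGroup, zAlpha = 1.96, zBeta = 0.84) {
  return (zAlpha + zBeta) * stdDev * Math.sqrt(2 / nPerGroup);
}

// Hypothetical example: daily revenue noise of 1,000 per region-day,
// 60 region-days per group over a 3-week test.
const mde = minimumDetectableEffect(1000, 60);
// ~511: lifts smaller than this would be indistinguishable from noise
```

The practical takeaway holds either way: more regions or a longer test shrinks the smallest lift you can reliably detect.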

Step 3: Run the Test

  1. Pause ads in holdout regions on the ad platform
  2. Keep all other campaigns and settings unchanged
  3. Mark the test as "Active" in Adverfly
  4. Wait for the full test duration — do not stop early

Step 4: Analyze Results

After the test period:

  1. Adverfly calculates the incremental lift using Bayesian Difference-in-Differences
  2. Review the posterior distribution of the treatment effect
  3. Check the credible interval — if it excludes zero, the result is statistically significant
  4. See the calculated iROAS (incremental return on ad spend)
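The logic behind steps 1-4 can be sketched with a simplified, non-Bayesian difference-in-differences. Adverfly's analysis is Bayesian; the numbers and function names below are hypothetical.

```javascript
// Simplified difference-in-differences (non-Bayesian illustration).
// Subtracting the control group's change nets out shared trends such as
// seasonality and promotions, isolating the ad effect.
function diffInDiff(pre, post) {
  const treatmentChange = post.treatment - pre.treatment;
  const controlChange = post.control - pre.control;
  return treatmentChange - controlChange;
}

// Incremental return on ad spend
function iROAS(incrementalRevenue, incrementalSpend) {
  return incrementalRevenue / incrementalSpend;
}

// Hypothetical numbers: treatment regions kept ads, control regions paused them.
const lift = diffInDiff(
  { treatment: 100000, control: 95000 },  // pre-test revenue
  { treatment: 115000, control: 98000 }   // test-period revenue
);
// lift = (115000 - 100000) - (98000 - 95000) = 12000
// With 4,000 of incremental ad spend: iROAS(12000, 4000) = 3.0x
```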

Step 5: Feed Back to MMM

When a geo-test produces a significant result:

  1. Adverfly automatically uses the measured lift as a Bayesian prior for that channel in your MMM
  2. Future model runs will converge toward the experimentally validated truth
  3. This is called Ground Truth Feedback — closing the loop between experiments and models

Apps Unlocked at Level 4

| App | What It Does |
| --- | --- |
| Geo Testing | Full geo-lift testing suite with AI Test Designer, Candidate Finder, and result analysis |
| Recommendations | AI-powered budget recommendations now backed by experimental evidence, not just models |

How Experiments Enhance Existing Apps

  • Marketing Mix Modeling — experimental results feed back as Bayesian priors, improving future model accuracy
  • Insights — channels with geo-test data show higher confidence scores
  • AI Forecasting — forecasts calibrated against experimentally validated lift
  • Correlations — separate true causal impact from coincidental correlation

Key Metrics Unlocked

  • Incremental Lift (%) — percentage of conversions caused by ads
  • iROAS — incremental revenue divided by incremental ad spend
  • Posterior Probability — confidence that the effect is real (not noise)
  • Credible Interval — the range where the true effect most likely falls

Checklist Before Moving On

  • Completed at least 1 geo-lift test with a statistically significant result
  • iROAS calculated and compared to MMM ROI for the same channel
  • Ground Truth Feedback applied to MMM (Bayesian prior updated)
  • Identified next channels to test

When to Move to Level 5

You are ready for Unified Measurement when:

  • You have MTA running (Level 2)
  • You have MMM running with stable results (Level 3)
  • You have completed at least 1 incrementality test (Level 4)
  • You want a single source of truth that combines all methods

Level 5: Unified Measurement

Combine all measurement methods into a single, triangulated source of truth

Coming Soon — Level 5 describes the vision for unified measurement. The triangulation engine, unified ROI scores, and confidence-weighted blending are currently in development and not yet available in the platform.

What It Is

Unified Measurement is the final level — it combines Multi-Touch Attribution (MTA), Marketing Mix Modeling (MMM), and Incrementality Testing into a single, weighted view of marketing performance. Instead of three conflicting reports, you get one triangulated answer.

Why It Matters

  • No more conflicting numbers: MTA says one thing, MMM says another — Unified Measurement reconciles both
  • Maximum confidence: Each method's strengths compensate for the others' weaknesses
  • Defensible decisions: Budget recommendations backed by multiple measurement approaches
  • Continuous improvement: As you run more experiments, the system gets more accurate over time

How Triangulation Works

Adverfly uses confidence-weighted triangulation to blend measurement methods:

Weighting System

| Scenario | Weights | Confidence Score |
| --- | --- | --- |
| Geo-test + MMM + MTA | 50% Experiment, 30% MMM, 20% MTA | 0.95 |
| MMM + MTA (no experiment) | 60% MMM, 40% MTA | 0.70 |
| MTA only (fallback) | 100% MTA | 0.40 |

The weights reflect each method's causal reliability:

  • Experiments get the highest weight because they prove causation
  • MMM gets more weight than MTA because it controls for confounders
  • MTA provides real-time granularity but is correlation-based

Example: Meta Prospecting

| Method | ROI Estimate |
| --- | --- |
| MTA (Data-Driven) | 2.1x |
| MMM (Bayesian) | 3.4x |
| Geo-Lift Test | 3.1x |
| Unified ROI | 2.99x (confidence: 0.95) |

The unified ROI of 2.99x is calculated as: 0.50 × 3.1 + 0.30 × 3.4 + 0.20 × 2.1 = 2.99
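The confidence-weighted blend is a plain weighted average. A minimal sketch, using the scenario weights from the table (illustrative, not the platform's exact algorithm):

```javascript
// Confidence-weighted triangulation: blend per-method ROI estimates
// using weights that sum to 1 (illustrative sketch).
function unifiedROI(estimates) {
  return estimates.reduce((sum, e) => sum + e.weight * e.roi, 0);
}

const metaProspecting = unifiedROI([
  { method: "geo-test", roi: 3.1, weight: 0.5 },
  { method: "mmm", roi: 3.4, weight: 0.3 },
  { method: "mta", roi: 2.1, weight: 0.2 },
]);
// 0.5 * 3.1 + 0.3 * 3.4 + 0.2 * 2.1 = 2.99
```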

How to Set It Up in Adverfly

Step 1: Verify All Levels Are Active

Ensure you have completed the previous levels:

  • Level 1 — Pixel tracking with clean event data
  • Level 2 — MTA configured with 30+ days of data
  • Level 3 — MMM model running with acceptable accuracy
  • Level 4 — At least 1 completed incrementality test

Step 2: Review the Unified Dashboard

  1. Open the Insights dashboard
  2. Switch to the Unified attribution view
  3. Each channel shows its triangulated ROI with a confidence score
  4. Channels with geo-test data have the highest confidence scores

Step 3: Iterate and Improve

The system improves over time:

  1. Run more geo-tests — each experiment increases confidence for that channel
  2. Re-run MMM monthly — models improve with more data and experimental priors
  3. Monitor MTA quality — keep pixel data clean and UTM parameters consistent
  4. Test new channels — when you add a new channel, it starts at MTA-only (0.40 confidence) and graduates as you add MMM and experiments

The Measurement Flywheel

Unified Measurement creates a virtuous cycle:

  1. MTA provides real-time signal → identifies channels worth investigating
  2. MMM quantifies long-term impact → recommends budget shifts
  3. Experiments prove causation → calibrate MMM with ground truth
  4. Calibrated MMM produces better estimates → identifies next experiment
  5. Repeat — each cycle increases confidence across all channels

Apps at Full Power (Level 5)

At Level 5, every app in the platform operates with maximum intelligence:

| App | Enhancement |
| --- | --- |
| AI Chat | Ask questions across all measurement methods — "What is the true ROI of Meta?" returns a triangulated answer with confidence score |
| Insights | Unified ROI column with confidence indicators, method agreement badges |
| Summary | Executive dashboard showing triangulated performance, not just last-click |
| Automations | Budget rules based on unified ROI thresholds with confidence gates |
| Recommendations | Recommendations weighted by measurement confidence — experimental evidence ranks highest |
| Loyalty Program | Tie loyalty program effectiveness to incrementally measured customer acquisition |
| Data Exports | Export unified measurement data with confidence scores for external BI tools |
| Custom Reports | Query across MTA, MMM, and experiment data in a single report |

Everything Connected

At Level 5, the platform is no longer a collection of separate tools — it is a unified measurement system where:

  • Surveys validate what channels customers remember → compared against MMM and experiments
  • Creatives performance is measured by true incremental impact, not just clicks
  • LTVs reflect causally acquired customers, not just attributed ones
  • Forecasts are grounded in experimental truth, not just historical patterns

Key Metrics at Level 5

  • Unified ROI — triangulated return across all methods
  • Confidence Score — 0 to 1, how trustworthy the unified ROI is
  • Method Agreement — whether MTA, MMM, and experiments align (or conflict)
  • Coverage — percentage of spend covered by experimental validation

Ongoing Best Practices

  • Run at least 1 geo-test per quarter to keep experimental data fresh
  • Re-train MMM monthly to incorporate new data and priors
  • Review method agreement — if MTA and MMM diverge significantly, prioritize testing that channel
  • Keep pixel health high — MTA quality degrades if tracking breaks
  • Test new channels early — the sooner you experiment, the sooner you reach high confidence