Measurement Roadmap
Your step-by-step path to marketing measurement maturity — from basic tracking to unified measurement
Marketing measurement is not a one-time setup — it is a journey. Each level builds on the previous one, unlocking deeper insights and more confident budget decisions.
Why a Roadmap?
Most companies jump straight to advanced tools without building the foundation first. The result: unreliable data, conflicting metrics, and wasted budget. This roadmap ensures you build measurement maturity in the right order.
The Five Levels
| Level | Name | What You Unlock | Minimum Requirement |
|---|---|---|---|
| 1 | Pixel Tracking | Event data, pageviews, conversions | Adverfly Pixel installed |
| 2 | Multi-Touch Attribution (MTA) | Cross-channel journey insights | 30+ days of pixel data |
| 3 | Marketing Mix Modeling (MMM) | Causal ROI, budget optimization | 6+ months of spend data |
| 4 | Incrementality Testing | Experimental proof of ad impact | Active campaigns in 3+ regions |
| 5 | Unified Measurement | Triangulated truth, maximum confidence | All previous levels active |
Apps Unlocked per Level
Each level does not just unlock a measurement method — it also activates or enhances specific Adverfly apps:
| Level | Apps Unlocked |
|---|---|
| 1 | Summary, Insights, Creatives, Journeys, Surveys, Health Center, Map, Hourly Breakdown, UTM Builder, Data Bridge, Integrations, Labels, Automations, News |
| 2 | Custom Attribution, LTVs, Customer Segments, Correlations, Data Exports, Custom Reports |
| 3 | Marketing Mix Modeling, AI Forecasting, Marketing Angles, Ad Fatigue, Logbook, Recommendations |
| 4 | Geo Testing |
| 5 | Unified Dashboard (coming soon), Loyalty Program (coming soon) |
Some apps work from Level 1 but become significantly more powerful at higher levels. For example, Insights shows basic metrics at Level 1 but displays unified ROI with confidence scores at Level 5.
How to Use This Guide
Work through the levels in order. Each page explains:
- What it is — the concept in plain language
- Why it matters — the business impact
- How to set it up — step-by-step in Adverfly
- Apps unlocked — which Adverfly tools become available at this stage
- When to move on — signals that you are ready for the next level
Your current level depends on what data you already have. Most new customers start at Level 1 and reach Level 3 within their first quarter.
Why Marketing Measurement?
The problem every advertiser faces — and why one method is not enough
The Core Problem
You spend money on ads. Some of that money drives revenue. Some of it is wasted. The question is: how much of each?
This sounds simple, but it is one of the hardest problems in marketing. Here is why:
The Attribution Gap
A customer sees your Instagram ad on Monday, clicks a Google ad on Wednesday, and buys on Friday after typing your URL directly. Who gets the credit?
- Google claims the sale (last click)
- Meta claims the sale (view-through)
- Your analytics says it was direct traffic
- The customer says a friend recommended them
All four are partially right. None tells the full story. This is the attribution gap — the difference between what platforms report and what actually drives your revenue.
Why One Method Is Not Enough
| Method | Strength | Blind Spot |
|---|---|---|
| Platform reporting | Real-time, granular | Each platform over-counts, no cross-platform view |
| Pixel tracking | First-party data, cross-platform | Cannot track offline, limited by cookies and privacy |
| Surveys | Captures awareness channels (TV, podcasts, word-of-mouth) | Subjective, recall bias, small sample size |
| Attribution (MTA) | Multi-touch journey view | Cookie-dependent, correlation not causation |
| Marketing Mix Modeling | Privacy-proof, includes offline | Channel-level only, needs 6+ months data |
| Incrementality testing | Proves causation | Expensive, slow, tests one channel at a time |
No single method covers everything. That is why you need multiple methods — and this roadmap shows you how to build them up, step by step.
The Business Impact
Companies that invest in measurement maturity see concrete results:
- Stop wasting 20-30% of ad spend on over-saturated channels
- Discover hidden winners — channels that MTA undervalues but MMM reveals as high-ROI
- Defend budget decisions with experimental proof, not platform self-reporting
- React faster — real-time MTA signals combined with causal MMM insights
How the Methods Work Together
Think of measurement methods as lenses on the same reality:
- Pixel Tracking = the raw data (what happened)
- Surveys = the customer's perspective (what they remember)
- MTA = the microscope (detailed journey view, but narrow)
- MMM = the telescope (big-picture causal view, but less granular)
- Incrementality = the experiment (proof, but expensive and slow)
- Unified Measurement = all lenses combined into one picture
Each lens reveals something the others miss. The roadmap guides you through activating them in the right order.
Measurement Methods Explained
Each measurement method in plain language — what it is, how it works, and when to use it
Pixel Tracking
In one sentence: A small code snippet on your website that records every user action.
How it works: When someone visits your site, the pixel fires and sends data to Adverfly — which page they viewed, what they clicked, whether they bought something, and where they came from (UTM parameters).
Analogy: Think of it as a security camera for your website. It records everything, but it does not interpret what it sees. That is the job of the methods that follow.
You need this for: Everything else. Without pixel data, no other method works.
Surveys (Self-Reported Attribution)
In one sentence: Ask your customers directly how they found you.
How it works: After a purchase, a short survey appears: "How did you hear about us?" The customer selects from options like Instagram, Google, a friend, a podcast, etc.
Analogy: Instead of analyzing footprints, you ask the person: "How did you get here?" Simple, but surprisingly powerful — especially for channels that leave no digital footprint.
Why it matters: Surveys capture what tracking cannot — word-of-mouth, podcast mentions, offline ads, brand awareness. A customer might say "I saw you on a podcast" even though their last click was Google. Without surveys, you would credit Google and never know about the podcast.
Limitations: People forget, misattribute, or pick the most recent thing. Survey data is directional, not precise. That is why it complements — but does not replace — quantitative methods.
Multi-Touch Attribution (MTA)
In one sentence: Distributes conversion credit across every touchpoint in the customer journey.
How it works: Instead of giving 100% credit to the last click, MTA looks at the full sequence of interactions — first ad view, email click, retargeting ad, final purchase — and distributes credit based on a model (equal, position-based, or data-driven).
Analogy: In football, last-click attribution credits only the goal scorer. MTA also credits the midfielder who made the pass and the defender who started the play.
Why it matters: Without MTA, upper-funnel channels (brand awareness, prospecting) look terrible because they rarely get the last click. MTA reveals their true contribution to the journey.
Limitations: MTA only sees digital touchpoints tracked by your pixel. It misses offline channels, struggles with cross-device tracking, and shows correlation — not proof that the ad caused the purchase.
Marketing Mix Modeling (MMM)
In one sentence: A statistical model that measures how each marketing channel drives revenue over time.
How it works: MMM takes your daily spend per channel and your daily revenue, then uses regression analysis to isolate how much revenue each channel causes — controlling for seasonality, trends, promotions, and external factors like weather.
Analogy: MTA watches individual customers through a microscope. MMM flies above in a helicopter and sees the big picture — "When we spent more on TV, revenue went up two weeks later. When we cut Meta, nothing changed."
Why it matters: MMM is privacy-proof (no cookies needed), captures offline channels, and estimates causal impact — not just correlation. It also reveals saturation points (when spending more stops helping) and carryover effects (how long an ad's impact lasts).
Limitations: MMM works at the channel level, not campaign or ad level. It needs 6+ months of historical data. And without experimental validation, it is still a model — an informed estimate, not proof.
Incrementality Testing
In one sentence: Turn off ads in some regions and measure what happens to prove causation.
How it works: You select treatment regions (ads keep running) and holdout regions (ads are paused). After 2-4 weeks, you compare revenue between the two groups. The difference is your incremental lift — the revenue that would not have happened without ads.
Analogy: It is a clinical trial for advertising. Treatment group gets the "medicine" (ads), control group gets a "placebo" (no ads). The difference proves whether the medicine works.
Why it matters: This is the gold standard of measurement. It is the only method that proves your ads caused conversions — not just that they were correlated with them. Executives trust experiments more than models.
Limitations: Experiments are expensive (you pause real ads), slow (2-4 weeks per test), and test only one channel at a time. You cannot experiment on everything — that is why you combine it with MMM and MTA.
Unified Measurement
In one sentence: Combine all methods into one triangulated answer with a confidence score.
How it works: Adverfly takes the ROI estimates from MTA, MMM, and incrementality tests and blends them using confidence-weighted averages. Methods with stronger causal evidence (experiments) get higher weight. The result is a single unified ROI per channel with a 0-1 confidence score.
Analogy: One witness says the suspect was at the scene. A second witness agrees. DNA evidence confirms it. Each piece of evidence strengthens the case. Unified Measurement does the same — the more methods agree, the higher the confidence.
Why it matters: No more arguing about which report is "right." Marketing, finance, and the C-suite see one number they can trust. Budget decisions become defensible, and the system gets smarter over time as you run more experiments.
Quick Comparison
| | Pixel | Surveys | MTA | MMM | Incrementality | Unified |
|---|---|---|---|---|---|---|
| Data type | Events | Responses | Journeys | Aggregated | Experimental | Combined |
| Granularity | User-level | User-level | User-level | Channel-level | Region-level | Channel-level |
| Captures offline | No | Yes | No | Yes | Yes | Yes |
| Privacy-proof | No | Yes | No | Yes | Yes | Yes |
| Proves causation | No | No | No | Partially | Yes | Yes |
| Setup time | Hours | Days | Days | Weeks | Weeks | Ongoing |
| Data needed | Immediate | Immediate | 30+ days | 6+ months | Active campaigns | All methods |
Level 1: Pixel Tracking
The foundation — capture every user interaction on your website
What It Is
The Adverfly Pixel is a lightweight JavaScript snippet that records user interactions on your website — pageviews, button clicks, add-to-carts, and purchases. This raw event data is the foundation for everything that follows.
Without the pixel, there is nothing to attribute, model, or test.
Why It Matters
- Visibility: See exactly how users interact with your site after clicking an ad
- Conversion tracking: Know which campaigns drive actual revenue, not just clicks
- Data foundation: Every advanced measurement method (MTA, MMM, Geo-Tests) relies on accurate event data
How to Set It Up in Adverfly
Step 1: Install the Pixel
Add the Adverfly pixel to your website. See the Pixel SDK documentation for detailed installation instructions.
```html
<script>
!function(){var a=window.adverfly=window.adverfly||[];
/* ... pixel snippet ... */
}();
advPxl("init", YOUR_STORE_ID);
</script>
```
Step 2: Configure Conversion Events
Track purchases and other key events:
```javascript
advPxl("conversion", "purchase", {
  transaction_id: "ORDER-123",
  transaction_gross_revenue: 4900, // in cents
  transaction_currency: "EUR",
});
```
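A common integration mistake is sending revenue in major units (49.00) instead of cents. If you wire this up by hand, a small guard helps; `toCents` is an illustrative helper, not part of the Adverfly SDK:

```javascript
// Convert a decimal amount to minor units (cents) before sending.
// Guards against the classic bug of passing 49.00 instead of 4900.
// toCents is an illustrative helper name, not an Adverfly API.
function toCents(amount) {
  const cents = Math.round(amount * 100);
  if (!Number.isInteger(cents) || cents < 0) {
    throw new Error("invalid revenue amount: " + amount);
  }
  return cents;
}

// Usage in a checkout handler (assuming advPxl is loaded):
// advPxl("conversion", "purchase", {
//   transaction_id: order.id,
//   transaction_gross_revenue: toCents(order.total), // 49.00 -> 4900
//   transaction_currency: order.currency,
// });
```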
Step 3: Verify Data Flow
- Open Health Center in Adverfly
- Check that events are appearing in real time
- Verify that conversion values match your checkout system
Apps Unlocked at Level 1
Once your pixel is live and data is flowing, these apps become available:
| App | What It Does |
|---|---|
| Summary | Overview dashboard with key performance metrics across all channels |
| Insights | Breakdown analysis with filters, tree data, and custom columns |
| Creatives | Analyze creative asset performance, AI-powered labeling, and previews |
| Journeys | Visualize customer paths across touchpoints and conversion funnels |
| Surveys | Create post-purchase surveys to capture qualitative feedback (e.g., "How did you hear about us?") |
| Health Center | Monitor pixel health, data connections, and integration status |
| Map | Geographic visualization of your marketing performance by region |
| Hourly Breakdown | Analyze performance patterns by hour of day |
| UTM Builder | Build and test tracking URLs with UTM and Adverfly parameters |
| Data Bridge | Sync first-party pixel data back to Meta, Google, TikTok, and Snapchat |
| Integrations | Connect ad platforms, analytics tools, and data sources |
| Labels | AI-powered creative labeling — automatically classify assets by format, angle, tone, and more |
| Automations | Overview of all system schedules, background tasks, and agent automations |
| News | Industry news and trends relevant to your market |
Surveys — A Special Note
Surveys deserve extra attention at Level 1. Post-purchase surveys ("How did you hear about us?") capture self-reported attribution — a qualitative signal that complements your quantitative pixel data. This is especially valuable for channels that are hard to track digitally (podcasts, word-of-mouth, influencers).
Set up surveys early — the data compounds over time and becomes invaluable when you reach Level 3 (MMM).
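To see why this data compounds, here is a minimal sketch of turning raw survey answers into a self-reported channel mix that you can later compare against MMM results. The response shape is hypothetical, not an Adverfly API:

```javascript
// Tally "How did you hear about us?" answers into channel shares.
// The response object shape here is hypothetical.
function surveyChannelShares(responses) {
  const counts = {};
  for (const r of responses) {
    counts[r.channel] = (counts[r.channel] || 0) + 1;
  }
  const shares = {};
  for (const [channel, n] of Object.entries(counts)) {
    shares[channel] = n / responses.length;
  }
  return shares;
}

surveyChannelShares([
  { channel: "podcast" },
  { channel: "podcast" },
  { channel: "friend" },
  { channel: "instagram" },
]);
// { podcast: 0.5, friend: 0.25, instagram: 0.25 }
```

Even this crude tally surfaces channels (podcast, word-of-mouth) that never appear in click data.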
Checklist Before Moving On
- Pixel fires on every page of your website
- Purchase/conversion events include revenue and transaction ID
- At least 7 days of clean data is flowing
- No duplicate events or missing pages
- UTM parameters are consistent across campaigns
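The last checklist item is the one that most often slips. A small helper that enforces one canonical `utm_source` spelling per channel keeps campaign URLs consistent; the allowed-sources list here is illustrative:

```javascript
// Build a tracking URL with a fixed UTM naming convention, so the
// same channel never shows up as "fb", "facebook", and "meta".
// The allowed-sources list is illustrative, not an Adverfly setting.
const ALLOWED_SOURCES = ["meta", "google", "tiktok", "newsletter"];

function buildUtmUrl(baseUrl, { source, medium, campaign }) {
  if (!ALLOWED_SOURCES.includes(source)) {
    throw new Error("unknown utm_source: " + source);
  }
  const url = new URL(baseUrl);
  url.searchParams.set("utm_source", source);
  url.searchParams.set("utm_medium", medium);
  url.searchParams.set("utm_campaign", campaign);
  return url.toString();
}

buildUtmUrl("https://example.com/landing", {
  source: "meta",
  medium: "paid_social",
  campaign: "spring_sale",
});
// "https://example.com/landing?utm_source=meta&utm_medium=paid_social&utm_campaign=spring_sale"
```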
When to Move to Level 2
You are ready for Multi-Touch Attribution when:
- You have 30+ days of continuous pixel data
- You are running campaigns on 2+ channels (e.g., Meta + Google)
- You want to understand how channels work together, not just last-click
Level 2: Multi-Touch Attribution (MTA)
Understand the full customer journey across all touchpoints
What It Is
Multi-Touch Attribution assigns credit to every touchpoint a user interacts with before converting — not just the last click. Instead of giving 100% credit to the final ad, MTA distributes value across the entire journey.
Why It Matters
- Beyond last-click: Discover that your Meta prospecting campaigns initiate journeys, even if Google converts them
- Channel synergies: See which channels assist each other vs. cannibalize
- Smarter budgets: Stop cutting "low-performing" channels that actually drive awareness
Attribution Models in Adverfly
| Model | How It Works | Best For |
|---|---|---|
| Last Click | 100% credit to final touchpoint | Baseline comparison |
| First Click | 100% credit to first touchpoint | Understanding awareness drivers |
| Linear | Equal credit across all touchpoints | Balanced view |
| U-Shaped | 40% first, 40% last, 20% middle | Valuing both discovery and conversion |
| Total Impact | Combines click and impression signals to measure incremental impact | Most comprehensive view |
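To make the models concrete, here is a sketch of how conversion credit could be split across an ordered journey. The first four models follow the table directly; Total Impact also uses impression signals and is omitted, and none of this is Adverfly's actual implementation:

```javascript
// Distribute one conversion's credit across an ordered journey.
// Sketch only; Total Impact is omitted.
function splitCredit(touchpoints, model) {
  const n = touchpoints.length;
  const credit = new Array(n).fill(0);
  if (n === 0) return credit;
  if (n === 1) return [1];
  switch (model) {
    case "last_click":  credit[n - 1] = 1; break;
    case "first_click": credit[0] = 1; break;
    case "linear":      credit.fill(1 / n); break;
    case "u_shaped":
      if (n === 2) { credit[0] = 0.5; credit[1] = 0.5; break; }
      credit[0] = 0.4;     // 40% to the first touch
      credit[n - 1] = 0.4; // 40% to the last touch
      for (let i = 1; i < n - 1; i++) credit[i] = 0.2 / (n - 2); // middle
      break;
    default:
      throw new Error("unknown model: " + model);
  }
  return credit;
}

splitCredit(["meta_prospecting", "email", "google_brand"], "u_shaped");
// [0.4, 0.2, 0.4]
```

Running the same journey through two models side by side is the fastest way to see how much your "winner" depends on the model choice.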
How to Set It Up in Adverfly
Step 1: Ensure Data Quality
Before enabling MTA, verify in Health Center:
- Events are flowing from all channels
- UTM parameters are correctly configured
- No significant data gaps in the last 30 days
Step 2: Configure Attribution Settings
- Navigate to Attribution Settings in your workspace
- Select your preferred attribution model (start with Linear for a balanced view)
- Set your lookback window (default: 7 days)
- Define which conversion events to attribute
Step 3: Analyze Customer Journeys
- Open Journeys to see multi-touch paths
- Use Insights with attribution breakdown to compare channel performance
- Compare models side-by-side to understand each channel's role
Key Metrics Unlocked
- Attributed Revenue — revenue credited to each channel based on the model
- Assisted Conversions — conversions where a channel appeared but was not last-click
- Path Length — average number of touchpoints before conversion
- Time to Conversion — how long the typical purchase journey takes
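The last two metrics fall straight out of journey data. A sketch with a hypothetical journey shape (ordered touch timestamps plus a conversion timestamp, in milliseconds):

```javascript
// Compute average path length and time-to-conversion from journeys.
// The journey object shape is hypothetical, not an Adverfly API.
function journeyStats(journeys) {
  let totalTouches = 0;
  let totalDurationMs = 0;
  for (const j of journeys) {
    totalTouches += j.touchTimestamps.length;
    totalDurationMs += j.conversionAt - j.touchTimestamps[0]; // first touch -> purchase
  }
  return {
    avgPathLength: totalTouches / journeys.length,
    avgDaysToConvert: totalDurationMs / journeys.length / 86_400_000,
  };
}

const day = 86_400_000; // one day in milliseconds
journeyStats([
  { touchTimestamps: [0, 1 * day, 3 * day], conversionAt: 4 * day },
  { touchTimestamps: [0, 2 * day],          conversionAt: 2 * day },
]);
// { avgPathLength: 2.5, avgDaysToConvert: 3 }
```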
Apps Unlocked at Level 2
With attribution data flowing, these additional apps become available or significantly enhanced:
| App | What It Does |
|---|---|
| Custom Attribution | Configure and compare attribution models (Last Click, First Click, Linear, U-Shaped, Total Impact) |
| LTVs | Analyze customer lifetime value segmented by acquisition channel and campaign |
| Customer Segments | Segment customers by behavior, value tier, and acquisition source |
| Correlations | Visualize how creative launches and scaling events correlate with performance shifts |
| Data Exports | Generate API keys to access attributed data programmatically via the REST API |
| Custom Reports | Build custom queries combining attribution data with other dimensions |
LTVs and Customer Segments
These two apps become particularly powerful at Level 2. With attribution data, you can answer questions like:
- "Which channel brings the highest-LTV customers?"
- "Do Meta-acquired customers have different retention than Google-acquired ones?"
- "Which campaigns attract one-time buyers vs. repeat customers?"
This insight directly informs your budget allocation — even before MMM.
Limitations of MTA
MTA is powerful but has blind spots:
- Cookie-dependent: Cannot track users across devices or after cookie deletion
- Digital-only: Does not capture offline channels (TV, radio, OOH)
- Correlation, not causation: Showing in a path does not prove the ad caused the conversion
- Signal loss: iOS privacy changes and ad blockers reduce tracking coverage
These limitations are why you need Level 3 (MMM) and Level 4 (Incrementality Testing).
Checklist Before Moving On
- Attribution model configured and running for 30+ days
- Reviewed journey paths and identified key channel interactions
- Compared at least 2 attribution models to understand channel roles
- Identified questions MTA cannot answer (offline impact, true causality)
When to Move to Level 3
You are ready for Marketing Mix Modeling when:
- You have 6+ months of historical spend data across channels
- You want to understand offline and upper-funnel channel impact
- You need causal ROI estimates, not just correlation-based attribution
- Your monthly ad spend exceeds $10,000
Level 3: Marketing Mix Modeling (MMM)
Statistical models that reveal the true causal impact of every marketing channel
What It Is
Marketing Mix Modeling is a statistical approach that analyzes the relationship between your marketing spend and business outcomes (revenue, conversions) over time. Unlike MTA, MMM does not rely on user-level tracking — it works with aggregated spend and outcome data.
Adverfly uses Bayesian MMM powered by PyMC Marketing, producing probability distributions instead of single-point estimates. This means every ROI number comes with a confidence range.
Why It Matters
- Privacy-proof: Works without cookies, pixels, or user-level data
- Full-funnel view: Captures offline channels (TV, radio, OOH) alongside digital
- Causal estimates: Separates organic baseline from marketing-driven revenue
- Budget optimization: Recommends optimal spend allocation across channels
- Saturation curves: Shows the point of diminishing returns for each channel
How It Works
- Data input: Historical daily spend per channel + daily revenue/conversions
- Bayesian regression: The model learns how each channel's spend correlates with outcomes, controlling for trends, seasonality, and external factors
- Saturation modeling: Logistic curves capture diminishing returns — each additional dollar of spend yields less incremental revenue
- Adstock/carryover: Accounts for the delayed effect of advertising (e.g., a TV ad today still drives revenue next week)
- Output: Incremental revenue per channel, ROI with credible intervals, optimal budget allocation
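Adstock and saturation are the two transformations doing most of the work above. The sketch below hard-codes illustrative parameters and uses a Hill-type curve in place of the logistic form; a real MMM infers these values per channel via Bayesian inference rather than hard-coding them:

```javascript
// Geometric adstock: today's "effective" spend carries a decaying
// share of previous days' spend. decay = 0.5 means a one-day half-life.
function adstock(spend, decay) {
  const out = [];
  let carry = 0;
  for (const s of spend) {
    carry = s + decay * carry;
    out.push(carry);
  }
  return out;
}

// Hill-type saturation: incremental revenue flattens as effective
// spend grows. halfSat is the spend level giving half the max effect.
function saturate(effectiveSpend, maxEffect, halfSat) {
  return (maxEffect * effectiveSpend) / (halfSat + effectiveSpend);
}

// Illustrative parameters only.
const effective = adstock([100, 0, 0], 0.5); // [100, 50, 25]
saturate(effective[0], 1000, 200); // ~333, well below the 1000 ceiling
```

The adstock output shows carryover (spend on day 1 still "works" on days 2 and 3); the saturation curve shows why doubling spend rarely doubles revenue.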
How to Set It Up in Adverfly
Step 1: Connect Your Data Sources
- Ensure all ad platforms are connected (Meta, Google, TikTok, etc.)
- Revenue data flows from your pixel or is imported via CSV
- Verify at least 6 months of continuous spend + revenue data
Step 2: Trigger a Model Run
- Navigate to Marketing Mix Modeling in your workspace
- Click Run Model — the model trains on your historical data
- Training takes approximately 15-30 minutes
- You will be notified when results are ready
Step 3: Interpret Results
Key views in the MMM dashboard:
- KPI Overview — Total incremental revenue, blended ROI, model accuracy (R², MAPE)
- Bayesian Time Series — Daily revenue decomposition with 95% and 80% credible intervals
- Efficiency Chart — ROI vs. marginal ROI per channel (with optional MTA prior comparison)
- Saturation Curves — Current spend position on each channel's diminishing-returns curve
- Incremental Waterfall — Daily contribution of each channel to total revenue
- Budget Optimizer — AI-recommended budget reallocation for maximum incremental revenue
Step 4: Optimize Budget
- Review the Budget Optimizer recommendations
- Compare current vs. recommended allocation
- Identify channels that are over-saturated (spend beyond diminishing returns)
- Identify channels with untapped potential (spend below optimal)
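The optimizer's core idea is simple: give the next budget slice to the channel with the highest marginal return on its saturation curve. A greedy sketch with illustrative curve parameters (not actual model outputs, and not Adverfly's optimizer):

```javascript
// Greedy budget allocation: repeatedly assign the next budget slice
// to the channel whose saturation curve promises the highest marginal
// revenue. Curve parameters are illustrative.
function allocateBudget(channels, totalBudget, step = 100) {
  const hill = (s, max, half) => (max * s) / (half + s);
  const spend = Object.fromEntries(channels.map((c) => [c.name, 0]));
  for (let b = 0; b < totalBudget; b += step) {
    let best = null;
    let bestGain = -Infinity;
    for (const c of channels) {
      const s = spend[c.name];
      const gain = hill(s + step, c.max, c.half) - hill(s, c.max, c.half);
      if (gain > bestGain) { bestGain = gain; best = c.name; }
    }
    spend[best] += step;
  }
  return spend;
}

// The channel that saturates quickly (small half) wins early dollars,
// then budget shifts to the channel with more headroom.
allocateBudget(
  [
    { name: "meta",   max: 5000, half: 2000 },
    { name: "google", max: 5000, half: 500 },
  ],
  2000
);
```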
Key Metrics Unlocked
- Incremental Revenue — revenue caused by marketing (not organic)
- ROI with Credible Intervals — e.g., "ROI: 3.2x (80% CI: 2.8x – 3.6x)"
- Marginal ROI (mROI) — return on the next dollar spent per channel
- Saturation Point — the spend level where returns flatten
- Adstock Half-Life — how long a channel's advertising effect persists
Apps Unlocked at Level 3
With MMM running, these apps become available or gain new capabilities:
| App | What It Does |
|---|---|
| Marketing Mix Modeling | Full MMM dashboard with Bayesian time series, saturation curves, and budget optimizer |
| AI Forecasting | 12-week revenue forecasts with 80% credible interval bands, powered by MMM model data |
| Marketing Angles | AI-generated marketing angles informed by channel performance and creative effectiveness data |
| Ad Fatigue | Monitor saturation and creative fatigue — know when a channel hits diminishing returns |
| Logbook | Audit trail of model runs, budget changes, and optimization decisions |
| Recommendations | AI-generated optimization suggestions based on MMM outputs — saturation alerts, budget reallocation, creative swaps |
The MMM Ecosystem
At Level 3, the platform shifts from descriptive analytics to prescriptive intelligence. The MMM model acts as a backbone that enriches other apps:
- Insights now shows incremental revenue alongside attributed revenue
- Creatives can rank assets by their contribution to MMM-measured channel performance
- Surveys can validate self-reported attribution against MMM-measured channel impact
- Weather & News correlations are automatically controlled for in the model
Limitations of MMM
- Granularity: Works at the channel level, not campaign or ad level
- Historical bias: Model reflects past patterns — major strategy changes may not be captured
- Data requirements: Needs 6+ months of data for reliable results
- Validation: Without experiments, MMM is an estimate — not proof
This is why you need Level 4 (Incrementality Testing) to validate and calibrate MMM results.
Checklist Before Moving On
- First model run completed with acceptable accuracy (R² > 0.85, MAPE < 15%)
- Reviewed saturation curves and identified over/under-invested channels
- Compared MMM ROI with MTA ROI — noted discrepancies
- Identified 1-2 channels where you want experimental proof of impact
When to Move to Level 4
You are ready for Incrementality Testing when:
- Your MMM is running and producing stable results
- You have specific channels where you want experimental proof of ROI
- You are spending in 3+ geographic regions (for geo-lift tests)
- You want to calibrate your MMM with real experimental data
Level 4: Incrementality Testing
Run experiments to prove the true causal impact of your advertising
What It Is
Incrementality testing is the gold standard of marketing measurement. Instead of modeling or attributing, you run a controlled experiment: turn off ads in some regions (or for some users) and compare outcomes against regions where ads continue running.
The difference is your incremental lift — the revenue that would not have happened without advertising.
Why It Matters
- Causal proof: The only method that proves ads caused conversions (not just correlated)
- MMM calibration: Experimental results feed back into your MMM as Bayesian priors, making future models more accurate
- Budget confidence: Know with certainty whether a channel is worth its spend
- Executive trust: Experimental evidence is the most defensible measurement in the boardroom
Types of Incrementality Tests
Geo-Lift Tests (Recommended)
Compare geographic regions with and without advertising:
| Aspect | Details |
|---|---|
| How it works | Pause ads in selected "holdout" regions, keep running in "treatment" regions |
| Duration | Typically 2-4 weeks |
| Granularity | City, state, DMA, or postal code level |
| Privacy | No user-level data required |
| Best for | Proving channel-level impact |
Conversion Lift Tests (Platform-Managed)
Platform-run holdout tests (Meta, Google):
| Aspect | Details |
|---|---|
| How it works | Platform splits users into exposed vs. holdout groups |
| Duration | 1-4 weeks |
| Granularity | User level (managed by platform) |
| Privacy | Platform handles identity |
| Best for | Quick validation of a single platform's impact |
How to Set It Up in Adverfly
Step 1: Find Test Candidates
- Navigate to Geo-Testing in your workspace
- The Candidate Finder automatically analyzes your pixel data to identify regions suitable for testing
- Review suggested treatment-control region pairs (matched on historical performance)
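Region matching boils down to finding the candidate whose historical series tracks the treatment region most closely. A minimal distance-based sketch of that idea; the actual Candidate Finder is more elaborate:

```javascript
// Pick the control region whose daily revenue history is closest to
// the treatment region's, by root-mean-square distance. Sketch of the
// matching idea only.
function bestControl(treatment, candidates) {
  const rms = (a, b) => {
    let sum = 0;
    for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
    return Math.sqrt(sum / a.length);
  };
  let best = null;
  let bestDist = Infinity;
  for (const c of candidates) {
    const d = rms(treatment.revenue, c.revenue);
    if (d < bestDist) { bestDist = d; best = c.name; }
  }
  return best;
}

bestControl(
  { name: "berlin", revenue: [100, 120, 110] },
  [
    { name: "hamburg", revenue: [98, 118, 112] },
    { name: "munich",  revenue: [60, 70, 65] },
  ]
);
// "hamburg"
```

A well-matched control is what makes the later lift comparison trustworthy: any gap during the test is then attributable to the ads, not to pre-existing regional differences.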
Step 2: Design the Experiment
- Click AI Test Designer to generate a test plan
- Select the channel to test (e.g., "Meta Prospecting")
- Choose treatment and holdout regions from the suggested pairs
- Set test duration (recommended: 3-4 weeks)
- Review the minimum detectable effect — ensure your test has enough statistical power
Step 3: Run the Test
- Pause ads in holdout regions on the ad platform
- Keep all other campaigns and settings unchanged
- Mark the test as "Active" in Adverfly
- Wait for the full test duration — do not stop early
Step 4: Analyze Results
After the test period:
- Adverfly calculates the incremental lift using Bayesian Difference-in-Differences
- Review the posterior distribution of the treatment effect
- Check the credible interval — if it excludes zero, the result is statistically significant
- See the calculated iROAS (incremental return on ad spend)
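The arithmetic behind a lift readout is straightforward even without the Bayesian machinery. A difference-in-differences sketch with illustrative numbers (Adverfly's analysis is Bayesian and also produces credible intervals):

```javascript
// Difference-in-differences lift: compare each group's change from its
// pre-test baseline, so fixed regional differences cancel out.
// Numbers and object shapes are illustrative.
function incrementalLift(treatment, control, adSpend) {
  const tDelta = treatment.during - treatment.before;
  const cDelta = control.during - control.before;
  const incrementalRevenue = tDelta - cDelta;
  return {
    incrementalRevenue,
    liftPct: incrementalRevenue / treatment.before,
    iRoas: incrementalRevenue / adSpend,
  };
}

incrementalLift(
  { before: 100_000, during: 130_000 }, // treatment: ads kept running
  { before: 100_000, during: 110_000 }, // control: ads paused
  5_000                                 // ad spend in treatment regions
);
// { incrementalRevenue: 20000, liftPct: 0.2, iRoas: 4 }
```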
Step 5: Feed Back to MMM
When a geo-test produces a significant result:
- Adverfly automatically uses the measured lift as a Bayesian prior for that channel in your MMM
- Future model runs will converge toward the experimentally validated truth
- This is called Ground Truth Feedback — closing the loop between experiments and models
Apps Unlocked at Level 4
| App | What It Does |
|---|---|
| Geo Testing | Full geo-lift testing suite with AI Test Designer, Candidate Finder, and result analysis |
| Recommendations | AI-powered budget recommendations now backed by experimental evidence, not just models |
How Experiments Enhance Existing Apps
- Marketing Mix Modeling — experimental results feed back as Bayesian priors, improving future model accuracy
- Insights — channels with geo-test data show higher confidence scores
- AI Forecasting — forecasts calibrated against experimentally validated lift
- Correlations — separate true causal impact from coincidental correlation
Key Metrics Unlocked
- Incremental Lift (%) — percentage of conversions caused by ads
- iROAS — incremental revenue divided by incremental ad spend
- Posterior Probability — confidence that the effect is real (not noise)
- Credible Interval — the range where the true effect most likely falls
Checklist Before Moving On
- Completed at least 1 geo-lift test with a statistically significant result
- iROAS calculated and compared to MMM ROI for the same channel
- Ground Truth Feedback applied to MMM (Bayesian prior updated)
- Identified next channels to test
When to Move to Level 5
You are ready for Unified Measurement when:
- You have MTA running (Level 2)
- You have MMM running with stable results (Level 3)
- You have completed at least 1 incrementality test (Level 4)
- You want a single source of truth that combines all methods
Level 5: Unified Measurement
Combine all measurement methods into a single, triangulated source of truth
Coming Soon — Level 5 describes the vision for unified measurement. The triangulation engine, unified ROI scores, and confidence-weighted blending are currently in development and not yet available in the platform.
What It Is
Unified Measurement is the final level — it combines Multi-Touch Attribution (MTA), Marketing Mix Modeling (MMM), and Incrementality Testing into a single, weighted view of marketing performance. Instead of three conflicting reports, you get one triangulated answer.
Why It Matters
- No more conflicting numbers: MTA says one thing, MMM says another — Unified Measurement reconciles both
- Maximum confidence: Each method's strengths compensate for the others' weaknesses
- Defensible decisions: Budget recommendations backed by multiple measurement approaches
- Continuous improvement: As you run more experiments, the system gets more accurate over time
How Triangulation Works
Adverfly uses confidence-weighted triangulation to blend measurement methods:
Weighting System
| Scenario | Weights | Confidence Score |
|---|---|---|
| Geo-test + MMM + MTA | 50% Experiment, 30% MMM, 20% MTA | 0.95 |
| MMM + MTA (no experiment) | 60% MMM, 40% MTA | 0.70 |
| MTA only (fallback) | 100% MTA | 0.40 |
The weights reflect each method's causal reliability:
- Experiments get the highest weight because they prove causation
- MMM gets more weight than MTA because it controls for confounders
- MTA provides real-time granularity but is correlation-based
Example: Meta Prospecting
| Method | ROI Estimate |
|---|---|
| MTA (Data-Driven) | 2.1x |
| MMM (Bayesian) | 3.4x |
| Geo-Lift Test | 3.1x |
| Unified ROI | 2.99x (confidence: 0.95) |
The unified ROI of 2.99x is calculated as: 0.50 × 3.1 + 0.30 × 3.4 + 0.20 × 2.1 = 2.99
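The same blend in code, using the weights from the scenario table in this section; a sketch of the weighting logic, not the production triangulation engine:

```javascript
// Confidence-weighted triangulation, with weights from the scenario
// table: experiment 0.50, MMM 0.30, MTA 0.20 when all three exist.
function unifiedRoi({ experiment, mmm, mta }) {
  if (experiment != null) {
    return { roi: 0.5 * experiment + 0.3 * mmm + 0.2 * mta, confidence: 0.95 };
  }
  if (mmm != null) {
    return { roi: 0.6 * mmm + 0.4 * mta, confidence: 0.7 };
  }
  return { roi: mta, confidence: 0.4 }; // MTA-only fallback
}

unifiedRoi({ experiment: 3.1, mmm: 3.4, mta: 2.1 });
// roi: 2.99, confidence: 0.95
```

Note how a new channel with no MMM or experiment data drops to the MTA-only fallback until more evidence accumulates.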
How to Set It Up in Adverfly
Step 1: Verify All Levels Are Active
Ensure you have completed the previous levels:
- Level 1 — Pixel tracking with clean event data
- Level 2 — MTA configured with 30+ days of data
- Level 3 — MMM model running with acceptable accuracy
- Level 4 — At least 1 completed incrementality test
Step 2: Review the Unified Dashboard
- Open the Insights dashboard
- Switch to the Unified attribution view
- Each channel shows its triangulated ROI with a confidence score
- Channels with geo-test data have the highest confidence scores
Step 3: Iterate and Improve
The system improves over time:
- Run more geo-tests — each experiment increases confidence for that channel
- Re-run MMM monthly — models improve with more data and experimental priors
- Monitor MTA quality — keep pixel data clean and UTM parameters consistent
- Test new channels — when you add a new channel, it starts at MTA-only (0.40 confidence) and graduates as you add MMM and experiments
The Measurement Flywheel
Unified Measurement creates a virtuous cycle:
- MTA provides real-time signal → identifies channels worth investigating
- MMM quantifies long-term impact → recommends budget shifts
- Experiments prove causation → calibrate MMM with ground truth
- Calibrated MMM produces better estimates → identifies next experiment
- Repeat — each cycle increases confidence across all channels
Apps at Full Power (Level 5)
At Level 5, every app in the platform operates with maximum intelligence:
| App | Enhancement |
|---|---|
| AI Chat | Ask questions across all measurement methods — "What is the true ROI of Meta?" returns a triangulated answer with confidence score |
| Insights | Unified ROI column with confidence indicators, method agreement badges |
| Summary | Executive dashboard showing triangulated performance, not just last-click |
| Automations | Budget rules based on unified ROI thresholds with confidence gates |
| Recommendations | Recommendations weighted by measurement confidence — experimental evidence ranks highest |
| Loyalty Program | Tie loyalty program effectiveness to incrementally measured customer acquisition |
| Data Exports | Export unified measurement data with confidence scores for external BI tools |
| Custom Reports | Query across MTA, MMM, and experiment data in a single report |
Everything Connected
At Level 5, the platform is no longer a collection of separate tools — it is a unified measurement system where:
- Surveys validate what channels customers remember → compared against MMM and experiments
- Creatives performance is measured by true incremental impact, not just clicks
- LTVs reflect causally acquired customers, not just attributed ones
- Forecasts are grounded in experimental truth, not just historical patterns
Key Metrics at Level 5
- Unified ROI — triangulated return across all methods
- Confidence Score — 0 to 1, how trustworthy the unified ROI is
- Method Agreement — whether MTA, MMM, and experiments align (or conflict)
- Coverage — percentage of spend covered by experimental validation
Ongoing Best Practices
- Run at least 1 geo-test per quarter to keep experimental data fresh
- Re-train MMM monthly to incorporate new data and priors
- Review method agreement — if MTA and MMM diverge significantly, prioritize testing that channel
- Keep pixel health high — MTA quality degrades if tracking breaks
- Test new channels early — the sooner you experiment, the sooner you reach high confidence