PREDICTION ENGINE

What caused the ROAS lift. Not what correlates with it.

Pre-production kill signals. Per-feature causal attribution. A reconciliation ledger where every prediction is locked before launch — and every miss is published. Defensible, CFO-proof numbers, before you commit a dollar of production spend.

RECONCILED PREDICTIONS

22,396

Since Jan 2025 · timestamped · no revision history

HIT RATE · TOP-QUARTILE

52%

2.1× random baseline · misses published alongside hits

MEAN BRIER SCORE

0.18

Lower is better · random baseline is 0.25

SUPER BOWL LX · PRE-GAME

8 / 10

Directional calls correct · zero conversion data · concept-only eval

LEDGER DEPTH

16 mo

Weekly reconciliation · 100% coverage · compounds with every campaign

THE PROBLEM

Your ROAS dashboard tells you what happened. Your A/B tests measure stated preference, not revealed behavior. Your attribution model is correlation dressed up as causation. The result: creative decisions get defended with numbers that don't survive contact with a CFO. The prediction engine replaces all of that with predictions locked before the spend and reconciled against outcomes after it.

THE DIFFERENCE, SIDE BY SIDE

Two ways to read the same market. Only one tells you what happens next.

MMM, A/B tests, and ROAS dashboards explain what already happened. Chorus predicts what will — at the feature level, before the spend — and grades itself against every outcome.

01 · PRE-PRODUCTION KILL SIGNAL

Kill losers before filming.

Competitor already owns this territory. This hook is saturating. This angle alienates your skeptic segment. Surfaced before a frame is shot — not inferred from a post-hoc ROAS report after the budget is already gone.

Ask: "Kill-check four Super Bowl concepts before we commit production budget."

Get: Kill signal with named reasons, saturated hooks, and the specific segment each concept will alienate.

Highest value score in the study · lowest variance across personas

02 · CAUSAL ATTRIBUTION

Prove what caused ROAS. Not what correlates with it.

Per-feature Shapley attribution with confidence bands. The conversation with your CFO moves from "these ads ran and ROAS went up" to "this specific creative feature caused a traceable lift, controlling for spend, seasonality, and every other campaign running at the same time." A sketch of the attribution computation follows below.

Ask: "What caused our Q1 ROAS lift — at the creative-feature level?"

Get: Shapley-ranked creative features, confidence bands, and what to scale, kill, or reframe.

#1 purchase trigger in the study · defensible, CFO-proof numbers
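
A minimal sketch of what per-feature Shapley attribution can look like in code, using the open-source shap library on a toy gradient-boosted ROAS model. The feature names, the model choice, and the synthetic data are illustrative assumptions, not the production Chorus pipeline.

```python
# Hedged sketch: rank creative features by their Shapley contribution to predicted ROAS.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
features = ["hook_strength", "brand_visibility", "offer_clarity",
            "spend_index", "seasonality_index"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Hypothetical ROAS, driven mostly by hook strength and offer clarity.
y = (2.0 + 0.9 * X["hook_strength"] + 0.5 * X["offer_clarity"]
     + 0.2 * X["spend_index"] + rng.normal(scale=0.3, size=len(X)))

model = GradientBoostingRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (n_samples, n_features)

# Rank creative features by mean absolute contribution to the ROAS prediction.
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=features)
print(ranking.sort_values(ascending=False))
```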

03 · BRIEF SIMULATION AT SCALE

Test 900 briefs as text. Produce the five that survive.

Zero production cost. Three independent models score every brief; where they disagree by 20+ points, the disagreement is the finding. An adversarial layer argues why your best concept will fail before you commit to production. Brief order affects ROAS by ±14.8%, so sequential exposure is modeled before the media plan locks. A sketch of the scoring-and-disagreement pass follows below.

Ask: "Generate 900 brief variants for our Q1 launch."

Get: Ranked briefs, predicted ROAS bands, segment response distribution, kill reasons per variant.

0.56 purchase trigger · scale manual creative cannot physically reach
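
A minimal sketch of the scoring-and-disagreement pass, assuming three stand-in scorer functions on a 0–100 scale. The 20-point threshold comes from the description above; everything else is illustrative, not the actual Chorus ensemble.

```python
# Hedged sketch: score briefs with three independent models and flag 20+ point disagreement.
from statistics import mean
from typing import Callable

DISAGREEMENT_THRESHOLD = 20  # points; per the 20+ point rule described above

def score_briefs(briefs: list[str],
                 scorers: list[Callable[[str], float]]) -> list[dict]:
    results = []
    for brief in briefs:
        scores = [scorer(brief) for scorer in scorers]
        spread = max(scores) - min(scores)
        results.append({
            "brief": brief,
            "scores": scores,
            "mean_score": mean(scores),
            "disagreement": spread,
            "flag_disagreement": spread >= DISAGREEMENT_THRESHOLD,  # the finding
        })
    # Survivors: highest consensus score first, lowest disagreement as tiebreak.
    return sorted(results, key=lambda r: (-r["mean_score"], r["disagreement"]))

# Usage: the five that survive out of 900 text-only variants.
# top_five = score_briefs(briefs, [model_a_score, model_b_score, model_c_score])[:5]
```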

04 · CAPITULATION RATE

The number your paid team has never seen.

The percentage of your active paid creative currently suppressing your own ROAS, at creative-feature resolution. Most brands cannot answer this question. The prediction engine can, and it flags the specific ads to kill today. A sketch of the calculation follows below.

Ask: "What's my Capitulation Rate this week — and which ads are dragging it?"

Get: % of active spend suppressing ROAS, named creatives to kill, estimated lift if killed.

Buyers become champions once they see their own number
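
A minimal sketch of the Capitulation Rate calculation, assuming a table of live creatives with per-creative spend and an estimated incremental-ROAS column. The column names and the lift estimate are illustrative assumptions, not the Chorus schema.

```python
# Hedged sketch: share of active spend running on creatives that suppress ROAS.
import pandas as pd

def capitulation_rate(active: pd.DataFrame) -> dict:
    """active: one row per live creative with columns
    'creative_id', 'weekly_spend', 'estimated_incremental_roas'."""
    dragging = active[active["estimated_incremental_roas"] < 0]
    share = dragging["weekly_spend"].sum() / active["weekly_spend"].sum()
    # Revenue recovered per week if the ROAS-suppressing ads are paused.
    lift = -(dragging["weekly_spend"] * dragging["estimated_incremental_roas"]).sum()
    return {
        "capitulation_rate": float(share),               # share of active spend suppressing ROAS
        "creatives_to_kill": dragging["creative_id"].tolist(),
        "estimated_weekly_lift": float(lift),
    }

# Usage: capitulation_rate(live_creatives_df)
```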

05 · THE PUBLISHED LEDGER

Every prediction locked before launch. Every miss published.

22,396 predictions logged to a reconciliation ledger and matched against realized outcomes. The weekly retrain consumes the residuals. The model compounds with every campaign and can't be replicated by a competitor starting fresh. A sketch of how a ledger entry is locked and scored follows below.

Ask: "Show me last cycle's scale-picks vs. what actually landed."

Get: 52% of Chorus scale-picks landed top-quartile — 2.1× random baseline. Misses included, not hidden.

Compounding moat · timestamped, Brier-scored, reconciled weekly
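
A minimal sketch of a locked-then-reconciled ledger entry and its Brier score. The record fields are illustrative assumptions, not the actual ledger schema.

```python
# Hedged sketch: lock a prediction before launch, reconcile it after, Brier-score the miss.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)            # frozen: locked at creation, no revision history
class LedgerEntry:
    campaign_id: str
    prediction: float              # P(creative lands in the top ROAS quartile)
    locked_at: str                 # timestamp written before any spend

@dataclass
class Reconciliation:
    entry: LedgerEntry
    outcome: int                   # 1 if it actually landed top-quartile, else 0

    @property
    def brier(self) -> float:
        return (self.entry.prediction - self.outcome) ** 2

entry = LedgerEntry("q1-hero-spot", 0.72, datetime.now(timezone.utc).isoformat())
rec = Reconciliation(entry, outcome=1)
print(rec.brier)                   # (0.72 - 1)^2 = 0.0784; the ledger mean reported above is 0.18
```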

TEST-SET HOLDOUT ACCURACY

Test-set accuracy. Four categories. Not cherry-picked wins.

Held-out validation on production brand data across four categories. Timestamped predictions matched to realized outcomes. Misses published alongside hits.

WHAT A CFO ASKS

Five questions this engine has to survive before it earns ad budget.

Finance doesn't care about dashboards. It cares about accountability. Here's what Chorus answers — on the record, in writing, every time.

01 · METHODOLOGY

How do you know your predictions are actually better than our current attribution?

Apples-to-apples benchmark on the same out-of-sample ad set. Brier score, log-loss, and top-decile hit rate against last-touch, MMM, and stated-preference A/B baselines. Chorus improves all three by 38–61%. We publish the test set; you can re-run it on your account during onboarding.
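
A minimal sketch of the three benchmark metrics named above, computed on one shared holdout set with scikit-learn. The label and probability arrays are placeholders for the held-out outcomes and each method's predictions.

```python
# Hedged sketch: Brier score, log-loss, and top-decile hit rate on one out-of-sample ad set.
import numpy as np
from sklearn.metrics import brier_score_loss, log_loss

def top_decile_hit_rate(y_true: np.ndarray, p: np.ndarray) -> float:
    """Share of the top 10% highest-scored ads that actually landed."""
    k = max(1, len(p) // 10)
    top = np.argsort(p)[-k:]
    return float(y_true[top].mean())

def benchmark(y_true: np.ndarray, p: np.ndarray) -> dict:
    return {
        "brier": brier_score_loss(y_true, p),                    # lower is better
        "log_loss": log_loss(y_true, p),                         # lower is better
        "top_decile_hit_rate": top_decile_hit_rate(y_true, p),   # higher is better
    }

# Usage: benchmark(y_holdout, p_chorus) vs. benchmark(y_holdout, p_last_touch),
# benchmark(y_holdout, p_mmm), benchmark(y_holdout, p_ab) on the same ad set.
```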

02 · MISS HANDLING

What happens when a prediction is wrong?

Every miss is logged with its named cause — distribution drift, novel creative format, unseen persona interaction. The adversarial model trains explicitly on misses. The 2.1% of predictions our ensemble agrees are "low confidence" are flagged in the UI before spend — so you know when not to trust the system, not just when to trust it.

03 · EXPLAINABILITY

Why should a CFO trust a black box with budget decisions?

It isn't a black box. Every prediction ships with its top-seven Shapley drivers, the 95% confidence band, a similar-brief comparable set, and the historical hit rate on that specific creative family. The underlying models are published in our methodology paper — no proprietary magic, just disciplined ensembles, DoubleML confounding adjustment, and a reconciliation ledger you can audit line by line.
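
A minimal sketch of a confounding adjustment in the DoubleML spirit: cross-fitted residualization of ROAS and of one creative feature on the confounders, then a residual-on-residual regression. The column names and model choices are illustrative assumptions, not the published methodology.

```python
# Hedged sketch: debiased effect of one creative feature on ROAS, net of confounders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def debiased_effect(roas: np.ndarray, feature: np.ndarray,
                    confounders: np.ndarray, folds: int = 5) -> float:
    """Effect of one creative feature on ROAS, holding spend/seasonality/etc. fixed."""
    # Cross-fitted predictions of outcome and treatment from the confounders.
    roas_hat = cross_val_predict(RandomForestRegressor(), confounders, roas, cv=folds)
    feat_hat = cross_val_predict(RandomForestRegressor(), confounders, feature, cv=folds)
    # Residual-on-residual regression isolates the feature's own contribution.
    roas_res, feat_res = roas - roas_hat, feature - feat_hat
    return float(LinearRegression().fit(feat_res.reshape(-1, 1), roas_res).coef_[0])

# Usage (hypothetical columns):
# debiased_effect(df["roas"].to_numpy(), df["hook_strength"].to_numpy(),
#                 df[["spend", "seasonality", "concurrent_campaigns"]].to_numpy())
```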

04 · DATA BOUNDARIES

Whose data trains the model — and whose data can influence my predictions?

The base ensemble is trained on a public creative corpus and anonymized opt-in benchmarks. Your account data — spend, creative assets, outcome metrics — is used exclusively to calibrate your private instance. Multi-tenant firewall. No cross-customer leakage. SOC 2 Type II. DPA available before you share a single asset.

05 · SWITCHING COST

What's the exit cost if this doesn't work for us?

Read-only ad-account integration — no rewiring, no pixel swap, no tracking changes. Cancel any time. Export the full reconciliation ledger (CSV + JSONL) on the way out. Your historical predictions and their reconciled outcomes are yours. If we can't beat your incumbent stack on the first 30-day bake-off, we tell you.

GET STARTED

Hook up one ad account. Get your first predictions this week.

Four steps to a live prediction loop on your own account. Invite code access during the explorer program — use EXPLORE2026.

STEP 1

Create your Adology account.

Sign up using invite code EXPLORE2026. Onboarding takes about two minutes.

Sign up →

STEP 2

Authorize your ad account.

Connect Meta, TikTok, or Google. Read-only. The model needs spend + outcome data to calibrate against your actual ROAS.

See integrations →

STEP 3

Run your first kill-check.

Paste four briefs or concepts. Get back kill signals, saturated hooks, and predicted ROAS bands before a frame is shot.

STEP 4

Let it compound.

Every prediction is locked before launch and reconciled against realized outcome. The model sharpens on your brand every week you use it.

Browse all skills →