Attention Metrics Will Replace Impressions and Clicks. Here Is Why.
An impression that nobody sees is worthless. A click from a bot is worse than worthless. Attention is the only honest metric.
The ad industry's measurement foundation is rotten.
An "impression" means a pixel was theoretically loaded somewhere on a page. The IAB standard for a viewable display impression is 50% of the ad's pixels visible for 1 continuous second. Think about what that means: an ad can be half off-screen, visible for barely one second, and it counts as a viewable impression. The advertiser pays full price. This is the standard the entire industry bills on.
Clicks are worse. On mobile, accidental fat-finger taps account for an estimated 30-40% of all ad clicks, depending on format and placement. Bot clicks remain a multi-billion-dollar fraud vector despite years of investment in invalid-traffic detection. And even a legitimate click has a tenuous relationship with purchase intent: someone who clicks an ad and bounces within 2 seconds was not interested. The advertiser still paid for the click.
We are replacing both metrics with attention measurement. Not as an experiment. As our primary billing and optimization signal. This is how we built it and why it changes everything.
What Attention Actually Means
Attention is not a single number. It is a composite score derived from observable user behavior signals, all measured client-side:
Active viewport time: how long the ad is visible in the viewport while the user is actively interacting with the page. Not just visible. Active. If the user switches tabs, scrolls the ad out of view, or stops interacting with the page (no scroll, touch, or mouse movement for 3 seconds), the attention clock stops. This is fundamentally different from the IAB's continuous visibility timer, which keeps counting even if the user has walked away from their phone.
Scroll velocity: a user scrolling at 4,000 pixels per second past a feed is not paying attention to any individual item. A user who decelerates from rapid scrolling and pauses near an ad is exhibiting attention behavior. We measure scroll velocity in the viewport region containing the ad and weight the attention score accordingly.
Interaction depth: did the user interact with the ad? Not click-through, but in-unit interaction. Did they watch more than 50% of a video? Did they swipe through a carousel? Did they expand a collapsed creative? Each interaction type has a calibrated weight based on its measured correlation with downstream conversion.
Dwell time context: 10 seconds of attention on a 15-second video ad is excellent. 10 seconds of attention on a static banner is unusual and likely indicates the user is distracted or the page is frozen. We normalize attention duration by creative type and expected engagement time.
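The active-clock rule behind the first signal can be sketched as a pure function over periodic samples. The names (`activeViewportMs`, `IDLE_TIMEOUT_MS`) and the sampling shape are illustrative assumptions, not the SDK's internal code:

```javascript
// Sketch of the active-viewport clock: credit time only while the ad is in
// the viewport AND the user has interacted within the last 3 seconds.
// All names here are hypothetical; this is not the production SDK.
const IDLE_TIMEOUT_MS = 3000;

function activeViewportMs(samples) {
  // samples: [{ ts, inViewport, lastInteractionTs }] taken at a fixed interval
  let activeMs = 0;
  for (let i = 1; i < samples.length; i++) {
    const prev = samples[i - 1];
    const idle = prev.ts - prev.lastInteractionTs > IDLE_TIMEOUT_MS;
    if (prev.inViewport && !idle) {
      activeMs += samples[i].ts - prev.ts; // credit the elapsed slice
    }
  }
  return activeMs;
}
```

The key property is that the clock is conservative: a tab switch, a scroll-out, or three seconds of stillness all stop accumulation, so the number can only undercount attention, never overcount it.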
The raw signals are combined into a single attention score between 0 and 100. The model weights were calibrated against a holdout dataset of 14 million impressions where we had both attention signals and downstream conversion data. An attention score above 60 correlates with a 4.2x higher conversion probability than a viewable impression with an attention score below 20.
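A composite of this kind can be sketched as plain weighted arithmetic. The field names and weights below are illustrative placeholders, not the calibrated production values:

```javascript
// Illustrative composite: weighted arithmetic over normalized signals,
// clamped to [0, 100]. Real weights come from calibration against
// conversion data; these numbers are made up for the sketch.
function attentionScore({ activeMs, expectedMs, avgScrollVelocity, interactionCount }) {
  const dwell = Math.min(activeMs / expectedMs, 1);                      // normalized by creative type
  const scroll = avgScrollVelocity < 500 ? 1 : 500 / avgScrollVelocity;  // slow scroll = attentive
  const interact = Math.min(interactionCount / 2, 1);                    // saturate at 2 in-unit interactions
  const raw = 0.5 * dwell + 0.2 * scroll + 0.3 * interact;               // hypothetical weights, sum to 1
  return Math.round(100 * Math.max(0, Math.min(1, raw)));
}
```

Because the function is pure arithmetic over a handful of scalars, it fits comfortably in a few dozen lines of dependency-free JavaScript and costs nothing to run on low-end hardware.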
The SDK: 2.8KB of Carefully Written JavaScript
The attention measurement SDK must run on every device we serve ads to, including Android Go devices with 1GB of RAM and 2015-era MediaTek processors. It must not cause jank. It must not measurably increase page load time. It must not drain battery.
The SDK is 2.8KB gzipped. It is hand-written JavaScript, not compiled from TypeScript, because we need precise control over what the minifier produces. Every byte matters when this code runs on billions of impressions per month.
The measurement loop runs on requestAnimationFrame when the ad is in the viewport, and suspends entirely when it is not. On low-end devices (detected via navigator.hardwareConcurrency and navigator.deviceMemory), the measurement frequency drops from every frame to every 5th frame. The accuracy reduction is less than 3%, which is well within our billing tolerance.
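A minimal sketch of the tier check and frame skipping, written as pure functions so the cutoffs are explicit and testable outside a browser (the thresholds are assumptions, not our production values):

```javascript
// Heuristic device tiering from navigator hints; thresholds are illustrative.
// Takes the values as parameters because navigator.deviceMemory is not
// available in every browser.
function deviceTier(hardwareConcurrency, deviceMemoryGB) {
  const cores = hardwareConcurrency || 2; // hint may be undefined
  const mem = deviceMemoryGB || 1;        // Chrome-only API, bucketed values
  if (cores <= 2 || mem <= 1) return 2;   // low end: sample every 5th frame
  return 1;                               // default: sample every frame
}

function frameSkip(tier) {
  return tier === 2 ? 5 : 1;              // measurement runs every Nth rAF tick
}

// In the browser the loop would look roughly like:
//   let frame = 0;
//   function tick() {
//     if (frame++ % frameSkip(tier) === 0) sample();
//     if (adInViewport) requestAnimationFrame(tick); // suspend when out of view
//   }
```

Tying the loop to `requestAnimationFrame` means it is automatically throttled or paused by the browser when the tab is hidden, which is exactly the behavior the active clock wants.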
The signals collected:
viewport_enter_ts, viewport_exit_ts,
active_start_ts, active_end_ts,
scroll_velocities[], // sampled at 200ms intervals
interaction_events[], // type + timestamp only
page_visibility_changes[]
These signals never leave the device as raw events. The SDK computes the attention score locally using a lightweight scoring function (47 lines of JavaScript, no dependencies, no matrix math, just weighted arithmetic). Only the final score and a few aggregate statistics are transmitted to our servers:
{
attention_score: 73,
active_viewport_ms: 8400,
interaction_count: 2,
creative_completion_pct: 0.85,
device_tier: 2
}
This payload is 94 bytes. It is sent once when the ad exits the viewport or the page unloads, using navigator.sendBeacon for reliable delivery. There are no per-event server calls. No streaming telemetry. No WebSocket connections. One beacon per impression.
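The dispatch path can be sketched as follows; the endpoint URL is a placeholder and the payload builder mirrors the fields shown above:

```javascript
// Build the once-per-impression payload; only aggregates, no raw events.
function buildPayload(score, activeMs, interactions, completionPct, tier) {
  return JSON.stringify({
    attention_score: score,
    active_viewport_ms: activeMs,
    interaction_count: interactions,
    creative_completion_pct: completionPct,
    device_tier: tier,
  });
}

// Browser-side dispatch (sketch): fire once on viewport exit or pagehide.
// sendBeacon queues the request even if the page is unloading, which is why
// it is the right tool for a fire-once metric. URL is a placeholder.
function sendOnce(payload) {
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    navigator.sendBeacon("https://example.invalid/attention", payload);
  }
}
```

`sendBeacon` returns immediately and lets the browser deliver the request in the background, so the ad unit never blocks on network I/O.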
Privacy by Architecture
We made a deliberate architectural choice: the attention SDK computes everything on-device and transmits only aggregate scores. This is not a privacy compromise; it is a privacy advantage.
We never receive scroll patterns, interaction timings, or behavioral sequences that could be used to fingerprint users. The attention score is a single number that describes how much attention an ad received. It says nothing about the user. It cannot be reversed to reconstruct browsing behavior. It cannot be correlated across sites to build a profile.
This matters enormously in Southeast Asia where privacy regulations vary by country and change frequently. Because our attention data is not personal data under any reasonable interpretation of any SEA privacy framework, we avoid the entire consent and data retention compliance surface. The score is attached to the impression event, not to a user profile. It is advertising performance data, not behavioral data.
When we explain this to advertisers, the ones who understand data privacy get it immediately. The ones who are used to receiving per-user behavioral data from other platforms are initially confused. Then they realize they never used that per-user data anyway. They used it to compute aggregate performance metrics. We just compute the aggregates client-side and skip the privacy-toxic intermediate step.
Billing on Attention
This is where it gets contentious. We offer attention-based billing as an alternative to CPM and CPC.
The model is straightforward: the advertiser sets a maximum cost per attentive impression (CPAi). An attentive impression is one where the attention score exceeds a threshold agreed upon at campaign setup (typically 40-60, calibrated per vertical). Impressions below the threshold are free. The advertiser pays only for attention they actually received.
The pricing math works out favorably for both sides. Our average CPM on attention-billed campaigns is 15-25% higher than standard CPM campaigns. But the advertiser's cost per conversion is 30-45% lower because they are not paying for wasted impressions. We make more revenue per impression served. They get more value per dollar spent. The waste in the middle disappears.
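The billing rule itself reduces to a one-line filter over attention scores. A sketch, with a hypothetical CPAi expressed in micro-dollars to keep the arithmetic exact:

```javascript
// Attention billing: charge CPAi only for impressions whose score clears
// the agreed threshold; everything below is free. Prices in micro-dollars
// (1 USD = 1,000,000) to avoid floating-point money.
function campaignCostMicros(scores, cpaiMicros, threshold) {
  return scores.filter((s) => s >= threshold).length * cpaiMicros;
}
```

For example, five impressions scoring [73, 12, 58, 40, 9] against a threshold of 40 produce three billable attentive impressions; the two low-attention impressions cost the advertiser nothing.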
Early results from 23 campaigns across 8 advertisers running on attention billing since Q4 2025:
- Average attention score on attention-billed campaigns: 58 (vs 34 on standard CPM campaigns). This is because our optimizer allocates attention-billed budget toward high-attention placements.
- Cost per conversion: 38% lower than equivalent CPM campaigns from the same advertisers in the same verticals.
- Advertiser satisfaction (measured by campaign renewal rate): 91% vs 67% for standard CPM.
- Our revenue per 1000 impressions served: $3.40 on attention campaigns vs $2.80 on CPM campaigns.
The optimizer learns fast. Within 48 hours of campaign launch, the attention-based bidding model has enough signal to distinguish high-attention placements from low-attention placements. It preferentially bids on inventory where historical attention scores are high. This creates a virtuous cycle: attention-billed campaigns concentrate on high-quality inventory, which improves attention metrics further, which improves conversion rates, which makes the advertiser happy, which increases spend.
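One simple way a bidder can track placement quality is an exponential moving average of observed attention scores per placement. This is a sketch of the idea, not the production model; the smoothing factor and clamp bounds are assumptions:

```javascript
// Per-placement attention EMA; the bid multiplier rises with expected
// attention relative to the campaign's threshold. Constants are illustrative.
const ALPHA = 0.1; // smoothing factor: higher = faster adaptation

function updateEma(prevEma, observedScore) {
  return prevEma === null
    ? observedScore                               // first observation seeds the EMA
    : ALPHA * observedScore + (1 - ALPHA) * prevEma;
}

function bidMultiplier(ema, threshold) {
  if (ema === null) return 1.0;                   // no history: bid baseline to explore
  return Math.max(0.2, Math.min(2.0, ema / threshold)); // clamp to [0.2x, 2x]
}
```

The clamp matters: without a floor, cold placements would never receive impressions and therefore never accumulate the attention history needed to prove themselves.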
Why the Industry Resists This
The ad-tech industry has a structural incentive to keep billing on impressions. Platforms that sell low-quality impressions at scale benefit from a measurement standard that counts everything. If you make money selling 10 billion impressions per month and 60% of them receive near-zero attention, switching to attention-based measurement erases 60% of your billable inventory overnight.
We do not have this problem because we are small and our inventory is curated. We serve ads in placements we control or in placements where we have attention data from previous campaigns. We can afford to guarantee attention because we know which placements deliver it.
Large platforms will adopt attention metrics eventually. Not because they want to, but because premium advertisers will demand it. Brand advertisers spending $10M+ per year increasingly have internal attribution models that show the disconnect between impression volume and business outcomes. When they can buy verified attention from one platform and unverified impressions from another, the budget shifts.
We are building for that future now. When attention becomes the standard, we will have years of calibration data, a proven billing model, and an SDK that runs on a billion devices. The platforms that scramble to retrofit attention measurement onto impression-based architectures will spend years catching up.
The only honest measurement of advertising is whether anyone actually saw it. That is what attention metrics provide. Everything else is a convenient fiction that the industry tells itself so it can keep billing for nothing.