Building Ad Dashboards That Load in Under One Second

Our advertisers see real-time campaign performance in 800ms. Their previous platform took 12 seconds. This is how.

By Anokuro Engineering · Ad Tech

We lost a $240K/year advertiser in Q3 2025 because their campaign manager said our dashboard was "too slow." The dashboard loaded in 4.2 seconds. That is objectively fast for ad-tech. It was not fast enough.

We rebuilt the entire dashboard stack. Current median load time is 800ms from navigation to fully interactive charts with live campaign data. The advertiser came back. Three of their competitors followed.

This is the architecture that makes 800ms possible when you are aggregating billions of ad events across time, geography, creative variant, and device type.

The Problem Is Bigger Than You Think

A typical advertiser dashboard query looks like this: show me impressions, clicks, CTR, CPM, and spend for campaign X, broken down by hour, across 6 countries, 14 creative variants, and 4 device categories, for the last 30 days. That is 6 x 14 x 4 x 720 = 241,920 data points per metric, times 5 metrics, totaling roughly 1.2 million values. And the advertiser expects the chart to appear instantly when they click a tab.

Most ad platforms handle this by running a SQL query against a data warehouse. BigQuery, Redshift, Athena. The query takes 3-8 seconds depending on cluster load. Then the frontend receives a multi-megabyte JSON payload, parses it, transforms it into chart-friendly structures, and renders it. Total: 8-15 seconds. Advertisers alt-tab to something else. Engagement drops. They stop checking performance. Campaigns underperform because nobody is watching.

Dashboard load time is not a vanity metric. We measured the correlation across 340 advertiser accounts over 6 months: every additional second of dashboard latency reduces daily active dashboard sessions by 11%. Advertisers who check their dashboard fewer than 3 times per day have a 2.4x higher churn rate than those who check 8 or more times. Fast dashboards retain advertisers.

Backend: Pre-Aggregated Materialized Views

We do not run ad-hoc aggregation queries at dashboard load time. Ever.

AnokuroDB maintains materialized views that pre-aggregate campaign metrics at multiple granularities. When an impression event arrives (and we ingest 420,000 per second at peak), it updates 4 materialized views simultaneously:

  • Per-minute rollup by campaign, country, creative, device
  • Per-hour rollup (same dimensions)
  • Per-day rollup (same dimensions)
  • Per-campaign lifetime rollup (collapsed dimensions for summary cards)

The critical insight: these updates are incremental. When an impression arrives from Vietnam on creative variant c-0847 on an Android device, we increment exactly 4 counters in 4 pre-sorted structures. We do not recompute the entire view. The incremental update takes 0.003ms. A full recomputation of a 30-day campaign view takes 340ms. We never do the latter during a dashboard request.
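A minimal sketch of that incremental path. The names here (`RollupStore`, the event shape, string-concatenated keys) are illustrative, not AnokuroDB's actual internals; the point is that one impression touches exactly one counter per granularity.

```typescript
// Hypothetical in-memory sketch of the incremental rollup update.
// Real AnokuroDB structures are pre-sorted and persisted; a Map stands
// in for them here.
type ImpressionEvent = {
  campaignId: string;
  country: string;
  creative: string;
  device: string;
  ts: number; // epoch seconds
};

class RollupStore {
  // One counter map per granularity. Keys encode all dimensions plus
  // the time bucket, so an update touches exactly one entry per map.
  private views = {
    minute: new Map<string, number>(),
    hour: new Map<string, number>(),
    day: new Map<string, number>(),
    lifetime: new Map<string, number>(),
  };

  private bucket(ts: number, seconds: number): number {
    return Math.floor(ts / seconds) * seconds;
  }

  // Increment four counters; never recompute a view from raw events.
  recordImpression(e: ImpressionEvent): void {
    const dims = `${e.campaignId}|${e.country}|${e.creative}|${e.device}`;
    const bump = (m: Map<string, number>, key: string) =>
      m.set(key, (m.get(key) ?? 0) + 1);
    bump(this.views.minute, `${dims}|${this.bucket(e.ts, 60)}`);
    bump(this.views.hour, `${dims}|${this.bucket(e.ts, 3600)}`);
    bump(this.views.day, `${dims}|${this.bucket(e.ts, 86400)}`);
    bump(this.views.lifetime, e.campaignId); // dimensions collapsed
  }

  impressions(view: "minute" | "hour" | "day" | "lifetime", key: string): number {
    return this.views[view].get(key) ?? 0;
  }
}
```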

The materialized views live in AnokuroDB's memtable when they are recent (last 6 hours) and in time-bucketed SSTables when they are older. A dashboard request for 30 days of hourly data reads from 1 memtable and at most 5 SSTables. The I/O pattern is sequential. P99 backend latency for a full dashboard payload: 120ms.

The Wire Format

We do not send JSON from the backend to the dashboard. Parsing a JSON payload with 1.2 million values takes 180ms in V8. That is 22% of our entire latency budget gone on deserialization alone.

We use a custom binary format. Metric values are packed as Float32Arrays, timestamps as Uint32Arrays (epoch seconds), and dimension keys as varint-encoded indices into a shared dictionary. The entire 30-day, 5-metric dashboard payload compresses to 380KB with Brotli. The browser receives an ArrayBuffer, wraps it in typed array views with zero parsing cost, and hands it directly to the rendering layer.

Deserialization time: 2ms. Down from 180ms. This was the single highest-leverage optimization in the entire project.
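To illustrate why typed-array views are effectively free to "parse," here is a simplified decoder over a layout we invented for this sketch: a u32 header (point count, metric count), then timestamps, then one Float32Array per metric. The real format also carries varint dimension indices and the shared dictionary, which are omitted here.

```typescript
// Zero-copy decode: every view below aliases the incoming ArrayBuffer.
// Nothing is copied and nothing is parsed byte-by-byte.
function decodePayload(buf: ArrayBuffer): {
  timestamps: Uint32Array;
  metrics: Float32Array[];
} {
  const [pointCount, metricCount] = new Uint32Array(buf, 0, 2);
  // Timestamps start right after the 8-byte header.
  const timestamps = new Uint32Array(buf, 8, pointCount);
  const metrics: Float32Array[] = [];
  let offset = 8 + pointCount * 4;
  for (let m = 0; m < metricCount; m++) {
    metrics.push(new Float32Array(buf, offset, pointCount));
    offset += pointCount * 4;
  }
  return { timestamps, metrics };
}
```

The returned typed arrays can be handed straight to the canvas rendering layer, which is what keeps deserialization near 2ms regardless of payload size.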

Frontend: TanStack Start, TanStack Query, and Predictive Prefetching

The dashboard is a TanStack Start application. We chose it over Next.js for one reason: TanStack Start gives us full control over data loading without opinionated server component abstractions that fight our streaming architecture.

TanStack Query handles all data fetching with aggressive caching. Every dashboard view has a query key that encodes campaign ID, date range, granularity, and dimension filters. When a user views the "Overview" tab, we prefetch the data for "By Country," "By Creative," and "By Device" tabs in the background. Navigating between tabs is instant because the data is already in the query cache.

The prefetching is not random. We built a simple Markov model from 3 months of navigation logs. After viewing "Overview," 72% of users click "By Creative" next. 18% click "By Country." We prefetch in probability order. The top-2 predicted tabs are prefetched at full priority. The rest are prefetched at idle priority using requestIdleCallback.
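The scheduling logic amounts to a few lines. This sketch uses invented names (`TabPrediction`, the injected `prefetch` callback); in the app, `prefetch` would wrap TanStack Query v5's `queryClient.prefetchQuery`, and the predictions would come from the Markov model.

```typescript
type TabPrediction = { tab: string; probability: number };

// Prefetch the top-2 predicted tabs immediately; defer the rest to idle
// time so they never compete with rendering the current tab. `idle`
// falls back to setTimeout where requestIdleCallback is unavailable.
function prefetchPredictedTabs(
  predictions: TabPrediction[],
  prefetch: (tab: string) => void,
  idle: (cb: () => void) => void = (cb) => {
    const ric = (globalThis as any).requestIdleCallback;
    ric ? ric(cb) : setTimeout(cb, 0);
  },
): string[] {
  const ordered = [...predictions].sort((a, b) => b.probability - a.probability);
  const eager = ordered.slice(0, 2);
  eager.forEach((p) => prefetch(p.tab));
  ordered.slice(2).forEach((p) => idle(() => prefetch(p.tab)));
  return eager.map((p) => p.tab);
}
```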

Cache invalidation is event-driven. The dashboard maintains a WebSocket connection to a lightweight Gleam service that publishes cache-bust signals when materialized views update. When new data arrives, we invalidate specific query keys and refetch in the background. The user sees a subtle shimmer animation on the affected chart segment for 200ms while fresh data loads. They never see a loading spinner after the initial load.
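The handler on the dashboard side is small. The message shape below is our guess at what a cache-bust signal might carry, and `invalidate` would wrap TanStack Query's `queryClient.invalidateQueries` in the real app; it is wired to the WebSocket's `onmessage`.

```typescript
// Hypothetical shape of a cache-bust signal from the Gleam service.
type CacheBust = { campaignId: string; granularity: "minute" | "hour" | "day" };

// Invalidate only the affected campaign + granularity. TanStack Query
// keeps the stale data on screen and refetches in the background, which
// is why the user sees a shimmer instead of a spinner.
function handleCacheBust(
  raw: string,
  invalidate: (queryKey: unknown[]) => void,
): void {
  const msg: CacheBust = JSON.parse(raw);
  invalidate(["campaign", msg.campaignId, msg.granularity]);
}
```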

Rendering 50,000 Data Points Without Jank

Recharts with SVG rendering falls over at about 5,000 data points. The DOM node count explodes, React's reconciliation grinds, and you get visible frame drops during pan and zoom interactions.

We render all time-series charts on a single Canvas element. The chart library is a custom fork of a lightweight canvas charting engine. Each data series is a typed array that we pass directly to the rendering loop without transformation. The rendering pipeline:

  1. Viewport culling: only data points visible in the current zoom window are drawn. A 30-day chart zoomed to a 6-hour window draws 360 points instead of 21,600.
  2. Level-of-detail: at full zoom-out, we render the per-hour rollup (720 points). As the user zooms in past a threshold, we swap to per-minute rollup and fetch it from the cache or the backend.
  3. Off-main-thread aggregation: when the user is zoomed out and we need to downsample 50,000 per-minute data points to 720 per-hour points for display, we do the aggregation in a Web Worker. The main thread stays free for input handling.
  4. GPU compositing: the canvas layers are composited by the GPU. Pan and zoom are CSS transforms applied to the canvas element, not redraws. We only redraw when the viewport change exceeds a threshold that would make the level-of-detail visibly wrong.
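Steps 1 and 2 can be sketched as two small functions. The 12-hour level-of-detail threshold here is illustrative, not our exact cutoff; the binary search works because the timestamp arrays arrive pre-sorted from the backend.

```typescript
type Level = "minute" | "hour";

// Level-of-detail: stay on the hourly rollup when zoomed out; swap to
// per-minute data once the window is tight enough that hourly points
// would look visibly coarse. Threshold chosen for illustration.
function chooseLevel(windowSeconds: number): Level {
  return windowSeconds <= 12 * 3600 ? "minute" : "hour";
}

// Viewport culling: timestamps are sorted, so the visible slice is a
// binary search away. Returns [start, end) indices into the arrays.
function visibleRange(timestamps: Uint32Array, from: number, to: number): [number, number] {
  const lowerBound = (target: number): number => {
    let lo = 0, hi = timestamps.length;
    while (lo < hi) {
      const mid = (lo + hi) >>> 1;
      if (timestamps[mid] < target) lo = mid + 1; else hi = mid;
    }
    return lo;
  };
  return [lowerBound(from), lowerBound(to + 1)];
}
```

Only the points inside the returned range are handed to the canvas draw loop, which is what keeps a tight zoom window down to a few hundred drawn points.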

The result: 60fps pan and zoom across 50,000 data points on a 2020 MacBook Air. On the M1 iPad that several of our advertisers use on the go, we maintain 120fps.
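The worker-side aggregation from step 3 reduces to a tight loop over typed arrays. It is shown here as a plain function for clarity; in production it would run inside a Web Worker, with the input and output buffers passed as transferables via `postMessage`.

```typescript
// Downsample a per-minute series to per-hour by summing each group of
// 60 points (appropriate for additive metrics like impressions; a rate
// metric like CTR would be re-derived from the summed numerators and
// denominators instead).
function downsampleSum(perMinute: Float32Array, factor = 60): Float32Array {
  const out = new Float32Array(Math.ceil(perMinute.length / factor));
  for (let i = 0; i < perMinute.length; i++) {
    out[Math.floor(i / factor)] += perMinute[i];
  }
  return out;
}
```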

The Summary Card Trick

The first thing an advertiser sees when they open a campaign dashboard is 4 summary cards: total spend, total impressions, average CTR, and average CPM. These numbers must appear before anything else.

We serve them from a dedicated endpoint that reads only from the per-campaign lifetime rollup. This endpoint returns 4 numbers in a 32-byte binary response. It resolves in 15ms at P99. The summary cards render in the first paint, 200ms before the time-series charts appear.
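Four numbers in 32 bytes works out to four float64s. The field order and little-endian layout below are our assumptions for the sketch, not the documented wire contract.

```typescript
type Summary = { spend: number; impressions: number; ctr: number; cpm: number };

// Decode the fixed-size summary response: four little-endian float64s.
function decodeSummary(buf: ArrayBuffer): Summary {
  if (buf.byteLength !== 32) throw new Error("expected 32-byte summary payload");
  const view = new DataView(buf);
  return {
    spend: view.getFloat64(0, true),
    impressions: view.getFloat64(8, true),
    ctr: view.getFloat64(16, true),
    cpm: view.getFloat64(24, true),
  };
}
```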

This is a perception hack. The user sees meaningful data almost instantly. The charts fill in 600ms later. Psychologically, the dashboard feels instant because the most important information is already there while the charts are still loading.

The Numbers

We measure dashboard performance across all 340 active advertiser accounts with Real User Monitoring:

  • Time to first meaningful paint (summary cards): P50 220ms, P95 410ms
  • Time to full interactive (all charts rendered and responsive): P50 800ms, P95 1,400ms
  • Tab switch latency (cached): P50 45ms, P95 120ms
  • Tab switch latency (cache miss): P50 380ms, P95 650ms
  • Chart zoom/pan frame rate: P50 60fps, P5 48fps

The previous version: P50 time to interactive was 4,200ms. P95 was 11,800ms. Tab switches triggered full-page loading spinners.

What We Learned

Dashboard performance is a retention lever. Advertisers who interact with their dashboards more frequently make better optimization decisions, which improves campaign performance, which increases spend, which increases our revenue. The causal chain from "dashboard loads in 800ms instead of 12 seconds" to "higher revenue per advertiser" is direct and measurable.

We spent 4 engineering-months on this rebuild. The revenue impact was visible within 6 weeks. If your ad platform has a slow dashboard, you are losing advertisers and you may not know why, because they will never tell you "I churned because your charts took too long." They will just stop logging in.

Copyright © 2026 Anokuro Pvt. Ltd. Singapore. All rights reserved.