TanStack Start in Production: The Good, the Fast, and the Opinionated
We bet our marketing site and internal dashboards on TanStack Start. Here is our honest report after three months.
Three months ago, we chose TanStack Start for two projects: our public marketing site (anokuro.com) and an internal dashboard that serves real-time ad performance data to our operations team. We evaluated Next.js, Remix, SvelteKit, and Astro. We chose TanStack Start. This is our honest assessment of that decision.
Why TanStack Start
We had four non-negotiable requirements:
File-based routing without magic. Next.js's App Router introduced server components, server actions, "use client" directives, and a caching layer that no one on our team fully understood. We spent more time reasoning about Next.js's behavior than building features. Remix was better, but its loader/action pattern felt like a step backward from hooks. TanStack Start's createFileRoute is explicit -- you declare your route, your loader, and your component. Nothing is implicit. Nothing runs unless you ask for it.
First-class TypeScript. Not "TypeScript supported." First-class. TanStack Start's route params, search params, loader data, and context are fully typed end-to-end. When we rename a route parameter, TypeScript errors appear in every component that references it. When we change a loader's return type, every consuming component updates or the build fails. This is not decoration -- it is structural integrity for a codebase with 14 routes and growing.
No vendor lock-in. Next.js wants you on Vercel. Remix was acquired by Shopify. We deploy to bare metal servers in Singapore because latency to Southeast Asian exchanges matters. TanStack Start deploys anywhere that runs Node.js (or Bun, which we use). Our deployment is a Docker container with a Bun server. No edge functions, no serverless cold starts, no platform-specific APIs.
TanStack Query integration. We were already using TanStack Query (React Query) for data fetching in our internal tools. TanStack Start's integration is seamless -- loaders populate the query cache during SSR, and the client picks up exactly where the server left off. No double-fetching. No hydration mismatches. The mental model is one query cache that starts on the server and continues on the client.
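To make the handoff concrete, here is a sketch of the pattern as it looks in a route file. The route path, the `fetchReports` helper, and the query key are illustrative, not from our codebase; the `queryOptions`/`ensureQueryData`/`useSuspenseQuery` trio is the documented TanStack Query v5 integration style, and `context.queryClient` assumes the query client has been placed on router context as the integration guide suggests.

```tsx
// Sketch of the server-to-client cache handoff. fetchReports and the
// '/reports/' route are hypothetical; the API calls are real TanStack ones.
import { createFileRoute } from '@tanstack/react-router';
import { queryOptions, useSuspenseQuery } from '@tanstack/react-query';

declare function fetchReports(): Promise<{ id: string; name: string }[]>;

const reportsQuery = queryOptions({
  queryKey: ['reports'],
  queryFn: fetchReports,
});

export const Route = createFileRoute('/reports/')({
  // During SSR the loader primes the shared QueryClient...
  loader: ({ context }) => context.queryClient.ensureQueryData(reportsQuery),
  component: ReportList,
});

function ReportList() {
  // ...and the hydrated client reads the same cache entry,
  // so nothing is fetched twice.
  const { data } = useSuspenseQuery(reportsQuery);
  return (
    <ul>
      {data.map((r) => (
        <li key={r.id}>{r.name}</li>
      ))}
    </ul>
  );
}
```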
Performance: The Numbers
Our marketing site scores:
- Lighthouse Performance: 99 (mobile), 100 (desktop)
- First Contentful Paint: 0.8s (mobile on 4G throttling)
- Time to Interactive: 1.1s (mobile)
- Total Blocking Time: 12ms
- CLS: 0
These numbers come from route-level code splitting and aggressive prefetching. Every route is a separate chunk. When a user hovers over a navigation link, TanStack Start prefetches the route's component and fires its loader. By the time they click, the page is already loaded. The perceived navigation time is effectively zero.
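The hover prefetching above is a router-level setting rather than per-link code. This is a sketch of the relevant configuration, not our full router setup; `defaultPreload: 'intent'` and `defaultPreloadStaleTime` are TanStack Router options, and the stale-time value here is an arbitrary example.

```tsx
// Hover/focus-intent prefetching is one router option, applied to every
// <Link> in the app. routeTree.gen is the generated file-based route tree.
import { createRouter } from '@tanstack/react-router';
import { routeTree } from './routeTree.gen';

export const router = createRouter({
  routeTree,
  // Preload a route's chunk and run its loader when the user
  // hovers or focuses a <Link>, so the eventual click is near-instant.
  defaultPreload: 'intent',
  // How long preloaded loader data stays fresh before a real
  // navigation triggers a refetch (tune to taste).
  defaultPreloadStaleTime: 30_000,
});
```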
Our JavaScript bundle for the marketing site index route is 47KB gzipped. The largest route (our technical blog with syntax highlighting) is 62KB. For comparison, the Next.js prototype of the same site was 89KB for the index route -- nearly double -- because of the React Server Components runtime, the router, and the caching layer.
The internal dashboard is a different beast. It renders real-time charts updating every 2 seconds across 8 panels, each backed by a separate TanStack Query subscription. Initial load is heavier (128KB gzipped) but subsequent interactions are instant because every data update flows through the query cache without re-mounting components.
Developer Experience: The Good
createFileRoute is the right abstraction. Here is a real route from our dashboard:
```tsx
export const Route = createFileRoute('/campaigns/$campaignId')({
  params: {
    parse: (params) => ({
      campaignId: z.string().uuid().parse(params.campaignId),
    }),
    stringify: (params) => ({
      campaignId: params.campaignId,
    }),
  },
  loader: async ({ params, context }) => {
    const campaign = await context.api.getCampaign(params.campaignId);
    if (!campaign) throw notFound();
    return { campaign };
  },
  component: CampaignDetail,
});
```
The route param is validated with Zod at the boundary. The loader fetches data with full type inference. The component receives typed loader data. If getCampaign changes its return type, TypeScript catches it here. If someone navigates to /campaigns/not-a-uuid, the param validation rejects it before the loader runs. Every boundary is typed. Every error is caught at compile time.
Search params as state. TanStack Start treats URL search params as typed, validated state. Our dashboard's campaign list has filters for date range, status, and ad network:
```tsx
export const Route = createFileRoute('/campaigns/')({
  validateSearch: z.object({
    from: z.string().date().optional(),
    to: z.string().date().optional(),
    status: z.enum(['active', 'paused', 'ended']).optional(),
    network: z.string().optional(),
  }),
});
```
Filters are URL state, not React state. Users can bookmark filtered views, share links with colleagues, and hit back/forward to navigate filter history. We did not write a single line of URL serialization code. TanStack Start handles it with type safety.
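To make "URL as state" concrete, here is a plain-TypeScript approximation of the serialization work the router does for us. The `Filters` type mirrors the schema above, but the two helper functions are ours, written for illustration; TanStack Start derives the equivalent logic from `validateSearch` automatically.

```typescript
// Approximation of the filter <-> URL round-trip that TanStack Start
// handles internally. Helper names are illustrative, not router APIs.
type Filters = {
  from?: string;
  to?: string;
  status?: 'active' | 'paused' | 'ended';
  network?: string;
};

function filtersToSearch(filters: Filters): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value !== undefined) params.set(key, value);
  }
  return params.toString();
}

function searchToFilters(search: string): Filters {
  const params = new URLSearchParams(search);
  const statuses = ['active', 'paused', 'ended'] as const;
  const status = params.get('status');
  return {
    from: params.get('from') ?? undefined,
    to: params.get('to') ?? undefined,
    status: (statuses as readonly string[]).includes(status ?? '')
      ? (status as Filters['status'])
      : undefined,
    network: params.get('network') ?? undefined,
  };
}

// A bookmarked link like /campaigns/?status=active&from=2025-01-01
// round-trips cleanly back into typed filter state.
const search = filtersToSearch({ status: 'active', from: '2025-01-01' });
```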
Devtools are exceptional. TanStack's devtools panel shows every route match, every loader execution, every query cache entry, and every pending navigation. When a dashboard panel shows stale data, we open devtools and see exactly which query is stale, when it was last fetched, and what triggered the refetch. This has cut our debugging time for data freshness issues by roughly 70%.
Developer Experience: The Pain Points
We are not here to sell TanStack Start. Here is what hurt.
SSR edge cases. Our marketing site has an animated SVG on the hero section. During SSR, the animation library (framer-motion) rendered the initial frame, but the client hydration started from a different animation state, causing a visible flash. TanStack Start does not have built-in Suspense-based streaming for deferred content the way Next.js does (or at least, the API was not obvious). We solved it with a useIsClient hook that delays the animation until hydration completes. It works, but it felt like a workaround.
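The workaround looks roughly like this. It is a generic React pattern, not a TanStack Start API; the implementation via `useSyncExternalStore` is one common way to write it without triggering hydration mismatches.

```tsx
// A minimal useIsClient hook, as described above. Returns false during
// SSR and the first hydration render, true afterward -- so the server
// and client agree on the initial markup.
import { useSyncExternalStore } from 'react';

const emptySubscribe = () => () => {};

export function useIsClient(): boolean {
  return useSyncExternalStore(
    emptySubscribe,
    () => true,   // client snapshot
    () => false,  // server snapshot
  );
}

// Usage: gate the framer-motion animation until hydration settles.
// const isClient = useIsClient();
// return isClient ? <AnimatedHero /> : <StaticHero />;
```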
Another SSR issue: our blog uses syntax highlighting with shiki, which is async. TanStack Start's loader runs the highlighting server-side, but we had to serialize the highlighted HTML as a string in the loader return value rather than rendering it as a React component. The serialization boundary between server and client is not as seamless as Next.js's RSC model for this specific use case.
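A sketch of that serialization workaround, assuming shiki's `codeToHtml` API and a hypothetical `loadPost` helper standing in for our CMS fetch; the route path is illustrative.

```tsx
// Highlight on the server in the loader, ship the result as a string,
// render with dangerouslySetInnerHTML on the client.
import { createFileRoute } from '@tanstack/react-router';
import { codeToHtml } from 'shiki';

declare function loadPost(
  slug: string,
): Promise<{ title: string; code: string }>;

export const Route = createFileRoute('/blog/$slug')({
  loader: async ({ params }) => {
    const post = await loadPost(params.slug); // hypothetical CMS fetch
    // Async highlighting runs server-side; the result crosses the
    // SSR boundary as a plain string, not a React element tree.
    const html = await codeToHtml(post.code, {
      lang: 'typescript',
      theme: 'github-dark',
    });
    return { title: post.title, html };
  },
  component: BlogPost,
});

function BlogPost() {
  const { title, html } = Route.useLoaderData();
  return (
    <article>
      <h1>{title}</h1>
      <div dangerouslySetInnerHTML={{ __html: html }} />
    </article>
  );
}
```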
The ecosystem is young. When we hit a routing edge case involving nested layouts with parallel loaders, the answer was not on Stack Overflow. It was in a GitHub Discussion with 3 replies, one of which was from Tanner Linsley himself. The documentation is good but not comprehensive. We read TanStack Start's source code at least once a week to understand behavior. For a team that values self-sufficiency, this is manageable. For a team that relies on tutorials and blog posts, it would be painful.
Middleware story is incomplete. We need authentication middleware that runs before every loader on protected routes. TanStack Start's middleware support exists but the patterns were not well-documented when we started. We implemented auth as a route context provider that throws a redirect to /login if the session is invalid. It works, but it is more boilerplate than Remix's nested route loaders or Next.js's middleware.ts.
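One way to express that guard is a pathless layout route whose beforeLoad hook checks the session from route context. The `/_authed` path and session shape below are ours for illustration; `beforeLoad` and `redirect` are TanStack Router APIs.

```tsx
// Auth guard on a pathless layout route. Every child route's loader
// only runs if beforeLoad passes; an invalid session short-circuits
// navigation with a redirect to /login.
import { createFileRoute, redirect } from '@tanstack/react-router';

export const Route = createFileRoute('/_authed')({
  beforeLoad: ({ context, location }) => {
    // context.session is assumed to be populated by the root route;
    // its shape here is illustrative.
    if (!context.session?.valid) {
      throw redirect({
        to: '/login',
        search: { redirect: location.href },
      });
    }
  },
});
```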
Build configuration. TanStack Start uses Vinxi under the hood, which uses Nitro, which uses Rollup or Vite depending on the context. When we needed to customize the Vite config for our Bun deployment, the layering was confusing: it took real digging to work out which config file affected which part of the build. The defaults are sensible, but when you need to deviate, the abstraction layers make it harder than it should be.
The Internal Dashboard: Real-Time Ad Data at Scale
Our internal dashboard serves 200+ users across engineering, operations, and sales. It displays real-time campaign performance: impressions, clicks, spend, and bid win rates, updated every 2 seconds.
The architecture:
- TanStack Start serves the application with SSR for initial load
- TanStack Query manages 24 active queries per dashboard view, each polling at 2-second intervals
- WebSocket connection for critical alerts (budget exhaustion, error rate spikes) that need sub-second delivery
- Bun server handling SSR and API proxying to our internal services
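The WebSocket channel feeds the same query cache the polling queries use, which is what keeps the two data paths from diverging. This is a sketch; the URL, message shape, and query key are illustrative, while `queryClient.setQueryData` with an updater function is the real TanStack Query API.

```tsx
// Push critical alerts straight into the query cache. Any component
// subscribed to ['alerts', campaignId] re-renders immediately,
// without waiting for the next poll.
import { QueryClient } from '@tanstack/react-query';

type Alert = {
  campaignId: string;
  kind: 'budget' | 'error-rate';
  message: string;
};

export function connectAlerts(queryClient: QueryClient): WebSocket {
  // Hypothetical internal endpoint.
  const ws = new WebSocket('wss://alerts.internal.example/ws');
  ws.onmessage = (event) => {
    const alert: Alert = JSON.parse(event.data);
    queryClient.setQueryData<Alert[]>(
      ['alerts', alert.campaignId],
      (prev = []) => [alert, ...prev],
    );
  };
  return ws;
}
```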
The query configuration is aggressive:
```tsx
const campaignMetrics = useQuery({
  queryKey: ['campaign-metrics', campaignId, timeRange],
  queryFn: () => api.getCampaignMetrics(campaignId, timeRange),
  refetchInterval: 2000,
  staleTime: 1500,
  gcTime: 30000,
  placeholderData: keepPreviousData,
});
```
staleTime of 1.5 seconds with a 2-second refetch means there is a 500ms window where data is considered stale before the next fetch. keepPreviousData prevents layout shifts during refetches. The result is charts that update smoothly without flickering or blank states.
Under load testing with 200 concurrent users each viewing different campaigns, the Bun server handles SSR at a p99 of 45ms. The client-side query layer generates approximately 2,400 API requests per second (200 users x 24 active queries each, polling at 2-second intervals). Our API gateway handles this comfortably.
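The request-rate figure is simple arithmetic, worth a sanity check:

```typescript
// Back-of-envelope check of the load numbers quoted above.
const users = 200;
const queriesPerUser = 24;  // active queries per dashboard view
const pollIntervalSec = 2;  // refetchInterval: 2000

const requestsPerSecond = (users * queriesPerUser) / pollIntervalSec;
console.log(requestsPerSecond); // 2400
```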
The Verdict
TanStack Start is the framework we wanted Next.js to be: fast, typed, explicit, and deployable anywhere. Its rough edges are real -- SSR limitations, young ecosystem, middleware gaps -- but they are the rough edges of a framework that is two years old, not fundamental design flaws.
We will stay on TanStack Start. We will contribute upstream when we hit gaps. And we will continue to bet on the principle that the best frameworks are the ones that get out of your way and let TypeScript do the work.