Bun in Production: One Year Later
We switched from Node to Bun for all our TypeScript services twelve months ago. Here is what broke and what got faster.
In February 2025, we migrated all 14 of our TypeScript services from Node.js to Bun. Not a trial. Not a gradual rollout. We cut over everything in three engineer-weeks because half-migrations are worse than no migration at all. Twelve months later, we are never going back.
This is not a Bun advocacy post. This is a field report. Here is what broke, what got faster, and what we learned about running Bun in a production ad-serving infrastructure processing 2.3 million requests per second across Southeast Asia.
The Migration
Three engineers. Three weeks. Fourteen services. The work broke down like this:
Week 1: Discovery and triage. We ran every service's test suite under Bun and catalogued failures. Out of 14 services, 9 passed all tests immediately. The remaining 5 had issues falling into three categories:
- Incompatible Node APIs (3 services): Usage of node:vm with specific options Bun did not support, node:diagnostics_channel for our custom tracing, and node:worker_threads with transferList semantics that differed from Node's implementation.
- Broken npm packages (4 services): Packages with native addons compiled against Node's N-API that needed recompilation, or packages that checked process.versions.node and refused to run.
- Subtle behavior differences (2 services): setTimeout ordering edge cases and Buffer.from() with unusual encodings.
Week 2: Fixes. We replaced node:diagnostics_channel with a direct OpenTelemetry integration (which was better anyway). We patched the node:vm usage by switching to Bun's FFI for our sandboxed ad-script evaluation. We forked two npm packages, submitted upstream PRs, and pointed our lockfile at the forks. The worker_threads issue was fixed by Bun's team in a nightly build after we filed the issue, which shipped in a stable release before our migration completed.
Week 3: Staged rollout. We deployed each service to a canary environment running Bun alongside the Node production deployment. Traffic was split 10/90, then 50/50, then 100/0. We monitored error rates, latency percentiles, memory usage, and CPU utilization at each stage. Two services required an additional day of debugging for edge cases that only appeared under production load.
Three engineer-weeks. Not three engineer-months. The migration was far less painful than we expected, and we attribute this to Bun's genuinely good Node.js compatibility. The 9-out-of-14 services that passed immediately represent 94,000 lines of TypeScript that ran without modification.
Performance Gains
Here are the numbers after twelve months of production data, averaged across all 14 services:
| Metric | Node.js 20 | Bun 1.2 | Improvement |
|--------|-----------|---------|-------------|
| Cold start | 1.84s | 0.77s | 2.4x faster |
| HTTP throughput (req/s) | 47,200 | 80,240 | 1.7x higher |
| Memory (RSS, steady state) | 312MB | 187MB | 40% less |
| P99 latency | 12.3ms | 7.1ms | 42% lower |
| Package install (CI) | 34s | 8s | 4.3x faster |
The cold start improvement matters because we auto-scale aggressively. Our ad-serving traffic follows daily patterns in each timezone: morning commute, lunch, evening, late night. Services scale from 3 to 40 instances throughout the day. With Node, a cold instance took 1.84 seconds to start serving traffic. With Bun, 0.77 seconds. During traffic spikes, this difference determines whether we serve ads or return no-bids while waiting for capacity.
The memory reduction was the surprise winner. At 40% less memory per instance, we either run more instances per host or run the same number with more headroom for traffic bursts. We chose the latter, which reduced our OOM kill rate from ~2 per week to zero in twelve months.
The bun install speed in CI compounds across every pull request. At 140 PRs per week, saving 26 seconds per install adds up to over an hour of CI time saved weekly on package installation alone.
Bun-Specific Features We Adopted
We did not just swap runtimes. We adopted Bun-specific features that have no Node equivalent.
Bun.serve(): Bun's built-in HTTP server replaced Express in 6 of our services and Fastify in 3. Bun.serve() is not a framework. It is a single function that takes a fetch handler and returns a server. No middleware chain, no plugin system, no router. We handle routing with a pattern-matching function that compiles route patterns to a trie at startup. The result is an HTTP server with zero dependencies that handles 80,000 requests per second on a single core.
```typescript
Bun.serve({
  port: 3000,
  fetch(req) {
    const route = router.match(req.method, new URL(req.url).pathname)
    if (!route) return new Response("Not Found", { status: 404 })
    return route.handler(req, route.params)
  },
})
```
We still use Hono for services that genuinely need middleware (authentication, CORS, request logging), but the majority of our internal services are simple request/response handlers that benefit from the minimal overhead.
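The router referenced in that handler is not shown in full; here is a minimal sketch of the idea, compiling /users/:id-style patterns into a trie that is walked one path segment at a time (names and structure are our illustration, not the production code):

```typescript
type Handler = (req: Request, params: Record<string, string>) => Response;

interface TrieNode {
  children: Map<string, TrieNode>;              // literal segments
  param?: { name: string; node: TrieNode };     // ":name" segments
  handlers?: Map<string, Handler>;              // keyed by HTTP method
}

const newNode = (): TrieNode => ({ children: new Map() });

class Router {
  private root = newNode();

  // Compile a pattern like "/users/:id" into the trie at startup.
  add(method: string, pattern: string, handler: Handler) {
    let node = this.root;
    for (const seg of pattern.split("/").filter(Boolean)) {
      if (seg.startsWith(":")) {
        node.param ??= { name: seg.slice(1), node: newNode() };
        node = node.param.node;
      } else {
        if (!node.children.has(seg)) node.children.set(seg, newNode());
        node = node.children.get(seg)!;
      }
    }
    (node.handlers ??= new Map()).set(method, handler);
  }

  // Walk the trie per path segment; literal segments win over params.
  match(method: string, pathname: string) {
    let node = this.root;
    const params: Record<string, string> = {};
    for (const seg of pathname.split("/").filter(Boolean)) {
      const literal = node.children.get(seg);
      if (literal) {
        node = literal;
      } else if (node.param) {
        params[node.param.name] = seg;
        node = node.param.node;
      } else {
        return null;
      }
    }
    const handler = node.handlers?.get(method);
    return handler ? { handler, params } : null;
  }
}
```

A router in this shape plugs directly into the fetch handler shown above: match once per request, dispatch with the extracted params.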
Built-in SQLite: Three of our services use SQLite for local caching and configuration. Previously, we used better-sqlite3, a native Node addon that required compilation and broke on every major Node upgrade. Bun's built-in bun:sqlite is faster (it uses Bun's optimized FFI layer), requires zero compilation, and the API is nearly identical. Migration was a one-line import change.
Built-in test runner: We replaced Jest in all services. bun test is not just faster (3.1x faster across our test suites), it is meaningfully simpler. No configuration file. No transform pipeline. TypeScript works natively. JSX works natively. The expect() API is Jest-compatible, so we changed import paths and deleted Jest configuration files. We lost some Jest features (custom serializers for snapshot testing, specific mock behaviors) but gained speed and simplicity that more than compensated.
Bun's bundler: For our three client-facing services that serve JavaScript bundles, we replaced esbuild with Bun's built-in bundler. Performance is comparable (esbuild is already extremely fast), but eliminating the dependency simplifies our toolchain. One less binary to install, one less version to track, one less thing that can break.
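After the swap, a build script reduces to a single Bun.build() call; a sketch with placeholder paths (entrypoint, outdir, and options are illustrative, not our actual config):

```typescript
// Illustrative build script; entrypoint and outdir are placeholders.
const result = await Bun.build({
  entrypoints: ["./src/client.ts"],
  outdir: "./dist",
  minify: true,
  sourcemap: "external",
  target: "browser",
});

if (!result.success) {
  for (const log of result.logs) console.error(log);
  process.exit(1);
}
```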
What Still Does Not Work
Bun is production-ready. It is not perfect. Here is what we still deal with:
node:diagnostics_channel: As mentioned, we replaced this. Bun's implementation exists but is incomplete. If your observability stack relies heavily on diagnostics channels (some APM agents do), you will need workarounds.
Certain node:crypto edge cases: We encountered two issues with ECDSA signature verification where Bun produced different results than Node for malformed inputs. Both were technically undefined behavior (the inputs were invalid), but our ad-verification pipeline needed to handle them identically to Node's behavior. We added explicit input validation before the crypto calls.
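The validation we added amounts to rejecting structurally invalid signatures before they reach the verifier, so neither runtime ever sees the inputs whose handling differs. A sketch using node:crypto, where the DER length check is a simplified stand-in for our actual validation:

```typescript
import { createVerify, type KeyObject } from "node:crypto";

// Reject structurally invalid DER signatures up front so Node and Bun
// never see the malformed inputs whose handling differs between runtimes.
function isPlausibleDerEcdsaSignature(sig: Buffer): boolean {
  // A short-form DER ECDSA signature is a SEQUENCE (0x30) whose declared
  // length matches the remaining bytes. Simplified check, not a full parser.
  if (sig.length < 8 || sig[0] !== 0x30) return false;
  return sig[1] === sig.length - 2;
}

function verifyAdSignature(message: Buffer, sig: Buffer, publicKey: KeyObject): boolean {
  if (!isPlausibleDerEcdsaSignature(sig)) return false; // identical on both runtimes
  const v = createVerify("sha256");
  v.update(message);
  return v.verify(publicKey, sig);
}
```

The point of the guard is that the rejection path is our code, not runtime-dependent crypto internals, so both runtimes answer identically for malformed inputs.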
node:inspector: Bun supports the Chrome DevTools Protocol, but some programmatic inspector APIs that Node exposes are not available. This affected our production profiling toolchain, which we rebuilt using Bun's built-in profiler (bun --inspect) instead.
Package compatibility: Roughly 1% of npm packages we depend on have issues. Usually it is a package checking process.versions.node or using a Node API in an unusual way. The rate is decreasing with every Bun release, but it is not zero. We maintain a list of known-incompatible packages and check it before adding new dependencies.
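The check itself is trivial; a sketch of the kind of guard that can run in CI against a package.json (the denylist contents and package names here are hypothetical):

```typescript
// Hypothetical denylist; a real one would live in a shared repo.
const KNOWN_INCOMPATIBLE = new Set(["some-node-only-addon", "legacy-profiler-agent"]);

// Returns the subset of declared dependencies known to break under Bun.
function checkDependencies(deps: Record<string, string>): string[] {
  return Object.keys(deps).filter((name) => KNOWN_INCOMPATIBLE.has(name));
}

const offenders = checkDependencies({ hono: "^4.0.0", "legacy-profiler-agent": "^1.2.0" });
if (offenders.length > 0) {
  console.error(`Known Bun-incompatible packages: ${offenders.join(", ")}`);
}
```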
Ecosystem maturity: The Bun Discord is the primary support channel. Documentation, while good, does not cover every edge case. When we hit issues, we often read Bun's Zig source code directly. This is fine for our team (we write Zig daily), but it is a real barrier for teams without systems programming experience.
The Decision Framework
Here is when to switch to Bun:
Switch if: You want faster cold starts, lower memory usage, and you are willing to spend a few weeks on migration. Your test suite is comprehensive enough to catch compatibility issues. Your team can debug runtime-level issues when they arise.
Do not switch if: You depend on specific Node APIs in node:vm, node:worker_threads, or node:inspector without being willing to adapt. Your application uses native addons that have not been tested with Bun. You need enterprise support contracts (Bun's commercial offerings are still young).
Switch anyway if: You are starting a new project. There is no reason to start a new TypeScript project on Node in 2026. Bun is strictly better for new projects where you do not carry compatibility baggage.
The Verdict
Twelve months. Fourteen services. 2.3 million requests per second. Zero Bun-caused production incidents after the initial migration. 2.4x faster cold starts. 40% less memory. A simpler toolchain with fewer dependencies.
We spent three engineer-weeks on the migration. We have saved more than that in reduced CI time alone, before counting the operational benefits of lower memory usage and faster scaling.
Bun is not a toy. It is not an experiment. It is a production runtime that is faster, leaner, and simpler than Node.js for every workload we have thrown at it. The compatibility gaps are real but narrow and shrinking. The performance gains are substantial and consistent.
We switched twelve months ago. We are never going back.