TypeScript, Zig, Gleam: The Three-Language Stack
One language for humans to read. One language for machines to execute. One language for systems to stay alive. This is our stack.
There is a popular idea in software engineering that a company should use one language for everything. The argument is simple: reduce cognitive overhead, share libraries, make hiring easier. We considered this idea carefully and rejected it completely.
One language everywhere produces slow, unreliable software. It forces you to use a tool designed for one problem domain to solve all problem domains. The result is either a fast language that is painful for UI work, a productive language that is unacceptably slow on the critical path, or a "good enough at everything" language that excels at nothing. We refuse all three options.
Anokuro uses three languages. Each one occupies a distinct niche defined by what it must optimize for. There is no overlap.
TypeScript: The Human Language
TypeScript handles everything a human needs to read, write, and iterate on quickly. This means:
- Dashboards and internal tools. Our observability dashboards, ad campaign management interfaces, creative preview tools, and reporting UIs are all TanStack Start applications written in TypeScript. These are complex applications with dozens of interactive components, real-time data streams, and intricate state management. TypeScript's type system catches an enormous class of bugs at compile time, and the React ecosystem gives us battle-tested UI primitives.
- This blog and the marketing site. The site you are reading is a TanStack Start application with content-collections for blog content. TypeScript, Tailwind CSS v4, TanStack Router. The entire frontend is TypeScript.
- Developer tooling and scripts. Build scripts, CI/CD configuration generators, data migration tools, one-off analysis scripts. TypeScript running on Bun is our default for anything that an engineer writes and runs locally.
- API clients and SDKs. Our customers integrate with our platform through TypeScript SDKs. The SDK types are generated from our API schema, giving customers compile-time validation of their integration code.
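To illustrate the generated-SDK idea, here is a minimal sketch of what a schema-derived client method might look like. The type and function names (`Campaign`, `buildUrl`, `getCampaign`) and the endpoint path are hypothetical, not Anokuro's actual SDK surface:

```typescript
// Hypothetical types as they might be generated from an OpenAPI schema.
interface Campaign {
  id: string;
  name: string;
  dailyBudgetUsd: number;
}

// Query-string builder shared by the generated client methods.
function buildUrl(base: string, path: string, params: Record<string, string>): string {
  const qs = new URLSearchParams(params).toString();
  return `${base}${path}${qs ? `?${qs}` : ""}`;
}

// A generated client method: the return type comes from the schema, so a
// caller who misspells a field fails to compile instead of failing at runtime.
async function getCampaign(base: string, id: string): Promise<Campaign> {
  const res = await fetch(buildUrl(base, `/campaigns/${id}`, {}));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()) as Campaign;
}
```

Because `Campaign` is derived from the schema, a caller who reads `campaign.daily_budget` instead of `campaign.dailyBudgetUsd` gets a compile error rather than an `undefined` in production.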
TypeScript's role is human productivity. We do not care that TypeScript is slower than Zig for numerical computation. We do not care that JavaScript engines' garbage collectors introduce occasional pauses. These things do not matter for dashboards and tooling. What matters is that an engineer can read a TypeScript file and understand it immediately, modify it safely with type checking, and ship a change in minutes. TypeScript is the best language in the world at this. Nothing else is close for web-facing applications.
The performance boundary is clear: TypeScript never runs on the ad-serving hot path. It never touches a bid request. It never processes a real-time event stream. The moment a workload involves microsecond-level latency requirements or sustained high-throughput data processing, it leaves TypeScript's domain and enters Zig's.
Zig: The Machine Language
Zig handles everything where nanoseconds equal revenue. This means:
- AnokuroDB. Our custom database is written entirely in Zig. The storage engine, memtable, compaction, WAL, block cache, and binary protocol parser. Zig's manual memory management with arena and pool allocators gives us deterministic performance with zero GC pauses. The comptime generics produce specialized code for each data type without runtime polymorphism overhead.
- The HTTP request parser. Bid requests arrive as HTTP POST bodies with JSON or Protobuf payloads. Our request parser is a hand-written Zig state machine that parses HTTP headers and deserializes bid request payloads in a single pass, writing directly into pre-allocated request structs. It processes a typical bid request in 1.4 microseconds. The fastest off-the-shelf HTTP parser we benchmarked (llhttp, used by Node.js) takes 3.8 microseconds for the same payload.
- Serialization and deserialization. All binary protocol encoding/decoding for inter-service communication. Our custom binary format is designed for zero-copy deserialization where possible — the parser validates the byte layout and returns a pointer into the receive buffer rather than allocating new memory and copying.
- The bid-serving hot path. The actual auction logic that receives a parsed bid request, queries AnokuroDB for user segments, evaluates bidding rules, selects a creative, and serializes the response. This is approximately 400 lines of Zig that runs 200,000 times per second with a p999 of 8ms for the entire pipeline.
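For orientation, the hot-path stages can be sketched in TypeScript (the production code is Zig; every name here, the rule shape, and the in-memory map standing in for an AnokuroDB lookup are illustrative):

```typescript
// Sketch of the auction stages; the real implementation is ~400 lines of Zig.
interface BidRequest { userId: string; floorCents: number }
interface BidResponse { creativeId: string; priceCents: number }

// Stand-in for an AnokuroDB user-segment query.
const segments = new Map<string, string[]>([["u1", ["sports", "autos"]]]);

interface Rule { segment: string; creativeId: string; bidCents: number }
const rules: Rule[] = [
  { segment: "sports", creativeId: "cr-17", bidCents: 250 },
  { segment: "autos", creativeId: "cr-09", bidCents: 180 },
];

function serveBid(req: BidRequest): BidResponse | null {
  const segs = segments.get(req.userId) ?? [];           // segment query
  let best: Rule | null = null;
  for (const r of rules) {                               // rule evaluation
    if (segs.includes(r.segment) && r.bidCents >= req.floorCents) {
      if (!best || r.bidCents > best.bidCents) best = r; // creative selection
    }
  }
  return best ? { creativeId: best.creativeId, priceCents: best.bidCents } : null;
}
```

The Zig version performs the same stages but writes the response directly into a pre-allocated buffer rather than returning a heap object.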
Zig's role is machine execution speed. We do not write dashboards in Zig. We do not write CLI tools in Zig. We do not use Zig where a garbage collector would be acceptable. Zig code is harder to write than TypeScript. It requires manual memory management, explicit error handling at every call site, and careful attention to alignment and layout. This cost is justified only when the workload demands deterministic, nanosecond-level performance.
The performance numbers justify the complexity cost every time we measure them. Our Zig bid-serving process uses 14MB of resident memory, all allocated at startup. It handles 200k req/s on 8 cores. The equivalent service written in Go (which we prototyped in August 2025) required 340MB of memory, used 14 cores for the same throughput, and had a p999 of 23ms due to GC pauses.
Gleam: The Survival Language
Gleam handles everything that must stay alive under partial failure. This means:
- Ad orchestration. The service that coordinates bid requests across multiple demand partners, manages timeout and fallback logic, and aggregates responses. This is inherently a concurrency problem: we fire parallel requests to multiple bidding services, collect responses with per-partner timeouts, and merge the results. The BEAM's lightweight processes and built-in receive with timeout make this trivial to express correctly.
- Distributed coordination. Campaign budget tracking across multiple serving instances. Frequency capping coordination (ensuring a user does not see the same ad more than N times across different serving nodes). Rate limiting. These are distributed state problems where partial failure is the normal operating condition, not an edge case.
- The trace aggregation service. As we described in our observability post, the Gleam aggregation service processes 12 million trace events per second. It has run for four months without a manual restart. Individual processes crash and restart automatically when they encounter malformed data. The system as a whole never goes down.
- Queueing and backpressure. All message queuing between services runs through Gleam processes. When a downstream service is slow or unreachable, Gleam processes accumulate messages in their mailboxes, apply backpressure through selective receive, and shed load gracefully. There is no external message broker. The BEAM is the message broker.
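The orchestration pattern from the first bullet — parallel partner calls, each with its own timeout, merged into whatever arrived in time — has a rough TypeScript analogue using promises. The real service uses BEAM processes and receive timeouts; the partner shape and timeout value below are invented for illustration:

```typescript
// TypeScript analogue of per-partner timeouts with graceful partial failure.
interface PartnerBid { partner: string; priceCents: number }

// Wrap a promise so it rejects if it does not settle within `ms`.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => reject(new Error("timeout")), ms);
    p.then(v => { clearTimeout(t); resolve(v); },
           e => { clearTimeout(t); reject(e); });
  });
}

async function collectBids(
  partners: Array<() => Promise<PartnerBid>>,
  timeoutMs: number,
): Promise<PartnerBid[]> {
  const settled = await Promise.allSettled(
    partners.map(call => withTimeout(call(), timeoutMs)),
  );
  // Slow or failed partners are simply dropped; the auction proceeds.
  return settled
    .filter((s): s is PromiseFulfilledResult<PartnerBid> => s.status === "fulfilled")
    .map(s => s.value);
}
```

On the BEAM, each partner call would be its own supervised process and a timed-out partner's crash would be isolated automatically; the promise version only approximates that isolation.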
Gleam's role is staying alive. The BEAM virtual machine was designed over 35 years ago for telephone switches that must achieve 99.999% uptime. Its concurrency model — lightweight processes with no shared memory, communicating via message passing, supervised by parent processes that restart children on failure — is unmatched for building systems that tolerate partial failure without operator intervention.
We chose Gleam specifically over Erlang and Elixir because Gleam has a sound static type system. Erlang's dynamic typing means entire classes of bugs (wrong message format, missing field in a record, incorrect function arity) are caught at runtime instead of compile time. In a system that processes 12 million events per second, "caught at runtime" means "this bug causes 4,000 process crashes per second until someone notices." Gleam's type system catches these at compile time. The BEAM's supervision trees are still there as a safety net, but the net catches fewer things because the type system already prevented them.
How They Communicate
The boundaries between languages define the communication protocols:
Zig to Gleam: Custom binary protocol over TCP. The Zig serving layer sends bid results, trace events, and health signals to the Gleam orchestration layer using a custom binary protocol. The format is defined in a shared specification document. Both sides implement serialization manually — there is no code generation from a shared schema because the protocol has exactly 8 message types and changes less than once per quarter. The overhead is negligible: 200 nanoseconds to serialize a bid result on the Zig side, and the BEAM's binary pattern matching makes deserialization in Gleam natural and fast.
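To make the tagged-message idea concrete, here is a minimal sketch of one such message, written in TypeScript for readability. The tag value and field layout are invented; the real wire format lives in the shared specification and is implemented in Zig and Gleam:

```typescript
// Illustrative layout: u8 tag | u32 priceCents (LE) | u16 id length (LE) | id bytes
const MSG_BID_RESULT = 0x01;

function encodeBidResult(priceCents: number, creativeId: string): Uint8Array {
  const idBytes = new TextEncoder().encode(creativeId);
  const buf = new Uint8Array(1 + 4 + 2 + idBytes.length);
  const view = new DataView(buf.buffer);
  view.setUint8(0, MSG_BID_RESULT);
  view.setUint32(1, priceCents, true);
  view.setUint16(5, idBytes.length, true);
  buf.set(idBytes, 7);
  return buf;
}

function decodeBidResult(buf: Uint8Array): { priceCents: number; creativeId: string } {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  if (view.getUint8(0) !== MSG_BID_RESULT) throw new Error("unexpected tag");
  const priceCents = view.getUint32(1, true);
  const len = view.getUint16(5, true);
  // subarray returns a view into the receive buffer, not a copy — the same
  // zero-copy idea the Zig deserializer uses with pointers.
  const creativeId = new TextDecoder().decode(buf.subarray(7, 7 + len));
  return { priceCents, creativeId };
}
```

With only eight message types, a hand-maintained pair of functions per type stays small enough that a code generator would cost more than it saves.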
Gleam to TypeScript: Typed REST and WebSocket. The Gleam orchestration services expose REST APIs consumed by TypeScript dashboards and tooling. The API schemas are defined in OpenAPI and we generate TypeScript types from them. WebSocket connections stream real-time data (trace percentiles, campaign budget consumption, bid volume) to dashboards. The Gleam side uses the mist HTTP server. The TypeScript side uses generated fetch clients with full type safety.
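On the dashboard side, the streamed messages can be modeled as a discriminated union so the compiler checks every message kind a handler touches. The message shapes below are illustrative, not the real generated schema:

```typescript
// Hypothetical shapes for the real-time WebSocket stream.
type StreamMessage =
  | { kind: "trace_percentiles"; p50Ms: number; p999Ms: number }
  | { kind: "budget"; campaignId: string; spentUsd: number }
  | { kind: "bid_volume"; requestsPerSec: number };

// The switch is exhaustive: adding a new kind to the union makes
// every handler that forgets it fail to compile.
function summarize(msg: StreamMessage): string {
  switch (msg.kind) {
    case "trace_percentiles": return `p999 ${msg.p999Ms}ms`;
    case "budget": return `${msg.campaignId}: $${msg.spentUsd}`;
    case "bid_volume": return `${msg.requestsPerSec} req/s`;
  }
}

// In a dashboard component (sketch):
// ws.onmessage = ev => render(summarize(JSON.parse(ev.data) as StreamMessage));
```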
Zig to TypeScript: They do not talk directly. There is no Zig-to-TypeScript communication path. Every interaction between the serving layer and the human-facing layer goes through Gleam. This is deliberate. Gleam is the coordination layer. It manages state, applies business logic, and handles failure. Zig serves bids. TypeScript renders UIs. They have no reason to communicate directly, and allowing them to would bypass the fault-tolerance guarantees that Gleam provides.
Why No Fourth Language
We are regularly asked why we do not use Go. The honest answer: Go occupies the same niche as Gleam but does it worse for our workload. Go's goroutines are excellent for concurrent I/O. The BEAM's processes are excellent for concurrent I/O and fault tolerance and hot code reloading and distributed communication between nodes. Go's garbage collector is good. Zig's lack of a garbage collector is better when you need deterministic latency. Go fills a middle ground that we do not have a middle-ground use case for.
We are also asked about Rust. Rust would replace Zig for the performance-critical path, but as we detailed in our database post, the borrow checker's cost in development velocity is not justified for our team. Zig gives us equivalent runtime performance with faster iteration speed for systems programming.
The discipline to maintain a three-language stack is the discipline to say no. Every engineer who joins Anokuro learns where each language boundary is and why it exists. The boundaries are not arbitrary. They are derived from the physics of our problem: ad serving has a latency budget measured in milliseconds, distributed coordination must tolerate partial failure, and humans need productive tools to build interfaces. No single language satisfies all three constraints. Three languages, each the best in its domain, is the minimum viable stack.
We will not add a fourth language. If a new problem domain emerges that none of our three languages handle well, our first response will be to question whether we actually need to solve that problem. Our second response will be to find a way to solve it within the existing stack. Only if both fail — and they have not yet — would we consider expanding the stack.
Three languages. Three problem domains. Zero compromise in any of them. This is the Anokuro stack.