Why We Left Rust for Zig

The borrow checker is a marvel of engineering. It also tripled our iteration time on database internals. We switched.

By Anokuro Engineering · Infrastructure

We wrote the first version of our storage engine in Rust. We rewrote it in Zig. This post explains why. It is not a Rust hit piece -- Rust is a remarkable language and we still use it for other projects. But for database internals specifically, Zig was the right tool for us.

The Rust Prototype: What Worked

Our Rust prototype was not a disaster. Large parts of it were excellent.

Fearless concurrency was real. We had a multi-threaded buffer pool with shared page latches, a concurrent skip list for the memtable, and an async I/O layer. The type system caught data races at compile time that would have been weeks of debugging in C. We shipped a prototype in three months with exactly zero concurrency bugs in production. That is not nothing.

Cargo is the best build system in any language. Dependency management, feature flags, conditional compilation, benchmarking, documentation generation -- all first-class. We miss Cargo every single day. Zig's build system is powerful but the package ecosystem is still early.

The ecosystem was deep. tokio, serde, bytes, crossbeam -- we had high-quality implementations of everything we needed. Community crates saved us months of work on the networking layer.

What Did Not Work

The problems were specific and severe.

Self-referential structures in our B-tree. Our B-tree nodes contain internal pointers -- a node holds references to its children, and during a split operation, a parent node needs to reference both the old and new child simultaneously while also being modified. In Rust, this is a nightmare. We tried Pin<Box<Node>>, unsafe blocks with raw pointers, Rc<RefCell<Node>>, and the ouroboros crate. Every approach was either unsafe, slow (runtime borrow checking via RefCell), or contorted beyond recognition.

The final Rust B-tree insertion function was 340 lines. Forty percent of those lines existed solely to satisfy the borrow checker. The equivalent Zig function is 198 lines, and every line does actual work.
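To make the contortion concrete, here is a hypothetical sketch of the Rc<RefCell<Node>> approach in the style we tried. It is not our production code, and it is simplified: a real B-tree split would also promote the median key into the parent. Every borrow is checked at runtime, and getting a borrow's scope wrong panics instead of failing to compile.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Illustrative self-referential tree node; names and the simplified
// split are invented for this sketch, not taken from the post's code.
struct Node {
    keys: Vec<u64>,
    children: Vec<Rc<RefCell<Node>>>,
    parent: Option<Weak<RefCell<Node>>>, // Weak to avoid a reference cycle
}

// Split child `idx`: the parent must reference the old child and the new
// sibling at once while being mutated itself, so RefCell moves the borrow
// checking to runtime.
fn split_child(parent: &Rc<RefCell<Node>>, idx: usize) -> Rc<RefCell<Node>> {
    let new_sibling = {
        let parent_ref = parent.borrow();
        let child = Rc::clone(&parent_ref.children[idx]);
        let mut child_mut = child.borrow_mut();
        let mid = child_mut.keys.len() / 2;
        let upper = child_mut.keys.split_off(mid); // upper half moves out
        Rc::new(RefCell::new(Node {
            keys: upper,
            children: Vec::new(),
            parent: Some(Rc::downgrade(parent)),
        }))
    };
    // The shared borrow above must end before this mutable borrow begins,
    // or RefCell panics at runtime -- the contortion described above.
    parent
        .borrow_mut()
        .children
        .insert(idx + 1, Rc::clone(&new_sibling));
    new_sibling
}
```

Note that the block scoping around `new_sibling` exists only to end the shared borrow before the mutable one starts; that is the kind of line that does no actual work.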

Async Rust compile times destroyed our iteration speed. Our project had 47 crates in a workspace. A single-line change in the storage engine crate triggered a 12-minute incremental rebuild because of async trait monomorphization. We tried cargo-nextest, sccache, cranelift backend, and splitting crates further. The best we achieved was 8 minutes. For a database engine where you modify a function, run a benchmark, modify again, and repeat for hours, 8-minute feedback loops are unacceptable.

We timed a full day of development. One engineer spent 3 hours and 47 minutes waiting for the Rust compiler. The same engineer, after the Zig rewrite, spent 22 minutes waiting for compilation on the same tasks.

Async trait limitations. At the time of our prototype (early 2025), async traits had stabilized but the ergonomics were still rough. We needed Send + Sync + 'static bounds everywhere. Dynamic dispatch on async traits required Box<dyn Future>, which meant heap allocations in our I/O hot path. The async_trait macro added indirection that made profiling opaque -- flame graphs showed anonymous futures instead of function names.
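The allocation cost is easy to see in a minimal sketch. The PageIo trait and MemIo type below are invented for illustration; the point is that dynamic dispatch over an async method forces every call through a Pin<Box<dyn Future>>, one heap allocation per I/O, in the style of the async_trait expansion:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Boxed-future alias in the style of the async_trait expansion:
// dynamic dispatch forces one heap allocation per call.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// Hypothetical I/O trait modeled on the post's description, not its code.
trait PageIo: Send + Sync {
    fn read_page(&self, page_id: u64) -> BoxFuture<'_, Vec<u8>>;
}

struct MemIo;

impl PageIo for MemIo {
    fn read_page(&self, page_id: u64) -> BoxFuture<'_, Vec<u8>> {
        // Box::pin on every read: the allocation that lands in the hot path.
        Box::pin(async move { vec![page_id as u8; 4] })
    }
}

// Minimal no-op waker so a ready future can be polled without a runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // SAFETY: every vtable entry is a no-op, so the contract is trivially met.
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Drive a future that completes without awaiting anything.
fn poll_ready<T>(mut fut: BoxFuture<'_, T>) -> T {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => panic!("future was not immediately ready"),
    }
}
```

The boxed future is also what makes profiling opaque: the flame graph sees an anonymous generated future type, not read_page.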

Lifetime annotations in graph structures. Our buffer pool uses a clock-sweep eviction algorithm where pages hold back-pointers to a descriptor table, which holds pointers to pages. This bidirectional reference pattern is fundamentally adversarial to Rust's ownership model. We ended up using unsafe in 23 locations in the buffer pool alone. At that point, we were writing C with extra syntax.
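The shape of the problem can be sketched like this. The types and fields are illustrative, not our buffer pool, but the page-to-descriptor cycle compiles only with raw pointers and unsafe:

```rust
// Hypothetical sketch of bidirectional page <-> descriptor links; the
// names and sizes are illustrative, not our real layout.
struct Page {
    data: [u8; 16],
    desc: *mut Descriptor, // back-pointer into the descriptor table
}

struct Descriptor {
    pin_count: u32,
    referenced: bool, // clock-sweep reference bit
    page: *mut Page,
}

// Wire the two together; the cycle is exactly what safe Rust forbids.
fn link(page: Box<Page>, desc: Box<Descriptor>) -> (*mut Page, *mut Descriptor) {
    let p = Box::into_raw(page);
    let d = Box::into_raw(desc);
    // SAFETY: p and d come from Box::into_raw, so both are valid and unique.
    unsafe {
        (*p).desc = d;
        (*d).page = p;
    }
    (p, d)
}

// One clock-sweep step: clear the reference bit if set; otherwise report
// whether the frame is an eviction candidate (unpinned and unreferenced).
//
// SAFETY (caller): `d` must point to a live Descriptor.
unsafe fn clock_tick(d: *mut Descriptor) -> bool {
    unsafe {
        if (*d).referenced {
            (*d).referenced = false;
            false
        } else {
            (*d).pin_count == 0
        }
    }
}
```

Once the structure is raw pointers end to end, the borrow checker verifies nothing about it -- hence "C with extra syntax."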

The Zig Rewrite

We rewrote the storage engine over six weeks with three engineers. Here is what changed.

40% fewer lines of code. The Rust prototype was 31,400 lines (excluding tests). The Zig rewrite is 18,900 lines. The reduction comes almost entirely from removing borrow checker annotations, trait boilerplate, and lifetime gymnastics. The actual logic -- the algorithms, the data structures, the I/O paths -- is nearly identical.

Compilation in seconds, not minutes. A full debug build of the Zig storage engine takes 4.2 seconds. An incremental rebuild after modifying a core file takes 0.8 seconds. The Rust equivalent was 34 minutes for a full build and 8 minutes incremental. This is not a minor convenience improvement. It changes how you develop. With sub-second rebuilds, you run benchmarks after every change. You experiment freely. You try five approaches to a problem instead of two because the compile-test loop is instant.

Direct memory layout control. In Zig, when we write:

const ORDER = 8; // branching factor; the value here is illustrative

const BTreeNode = extern struct {
    keys: [ORDER - 1]u64,
    children: [ORDER]*BTreeNode,
    count: u16,
    is_leaf: bool,
    _padding: [5]u8,
};

We know the exact byte layout. We control alignment, padding, and field ordering. @offsetOf gives us compile-time field offsets. This matters for a storage engine because the on-disk format is the in-memory format -- there is no serialization layer. A page on disk is @ptrCast to a struct pointer. If you do not control the memory layout to the byte, you cannot do this.

Rust's repr(C) provides similar guarantees, but Zig makes it the natural way to write structs rather than an annotation you remember to add.
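For comparison, here is a sketch of the Rust side (the ORDER value is illustrative, matching the Zig snippet's shape): repr(C) plus std::mem::offset_of! gives the same byte-level guarantees, but as an annotation you must remember rather than the default:

```rust
use std::mem::{align_of, offset_of, size_of};

const ORDER: usize = 8; // illustrative branching factor

// repr(C) gives the declaration-order, C-compatible layout that a Zig
// extern struct guarantees; without it, rustc is free to reorder fields.
#[repr(C)]
struct BTreeNode {
    keys: [u64; ORDER - 1],
    children: [*mut BTreeNode; ORDER],
    count: u16,
    is_leaf: bool,
    _padding: [u8; 5],
}
```

With repr(C) the field offsets are computable by hand, which is the property a cast-a-page-to-a-struct design depends on.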

comptime replaces macros and generics. Our WAL serialization uses comptime to generate type-specific serialization code at compile time:

fn walSerialize(comptime T: type, value: *const T) []const u8 {
    comptime {
        for (std.meta.fields(T)) |field| {
            if (@sizeOf(field.type) == 0) @compileError("zero-size fields not allowed in WAL records");
        }
    }
    // Taking a pointer keeps the returned slice valid: it views the
    // caller's storage rather than a by-value copy on this stack frame.
    return std.mem.asBytes(value);
}

This is not a proc macro. It is regular Zig code that runs at compile time. It is debuggable, readable, and produces clear error messages. Rust's proc macros are a separate language with separate tooling. Zig's comptime is just Zig.
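The closest Rust analogue we know of is an inline const block (stabilized in Rust 1.79). This is a sketch, not code from our tree: the assertion runs at compile time, once per monomorphized T, but there is no equivalent of Zig's per-field reflection short of a proc macro.

```rust
use std::mem::size_of;

// Rough analogue of the comptime check: the const block evaluates at
// compile time for each monomorphization of T. Illustrative sketch only;
// it assumes T has no padding bytes (e.g. a repr(C) struct of u64s).
fn wal_serialize<T>(value: &T) -> &[u8] {
    const {
        assert!(size_of::<T>() > 0, "zero-size types not allowed in WAL records");
    }
    // SAFETY: the slice borrows `value`, so it cannot outlive the record,
    // and it covers exactly size_of::<T>() initialized bytes for padding-free T.
    unsafe { std::slice::from_raw_parts(value as *const T as *const u8, size_of::<T>()) }
}
```

Even this close analogue reports failures at monomorphization time with less context than a @compileError pointing at the offending field.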

What Rust Does Better

We are honest about the tradeoffs.

Rust's error handling is superior. Result<T, E> with the ? operator is genuinely elegant. Zig's error unions are fine, but the lack of map, and_then, and combinator chains means error transformation is more verbose. We write more lines of error handling code in Zig.
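To show what we mean by combinator chains, here is a toy example -- the WAL field format is invented purely for illustration:

```rust
// Hypothetical WAL-header field parse demonstrating `?` plus combinators.
#[derive(Debug, PartialEq)]
enum WalError {
    BadHeader,
    BadLsn,
}

fn parse_lsn(field: &str) -> Result<u64, WalError> {
    field
        .strip_prefix("lsn=")
        .ok_or(WalError::BadHeader)? // Option -> Result, early return on None
        .parse::<u64>()
        .map_err(|_| WalError::BadLsn) // transform the error without a match
}

// and_then chains a fallible validation onto a fallible parse.
fn parse_nonzero_lsn(field: &str) -> Result<u64, WalError> {
    parse_lsn(field).and_then(|lsn| if lsn == 0 { Err(WalError::BadLsn) } else { Ok(lsn) })
}
```

In Zig, each of those transformation steps becomes an explicit catch or an if/else on the error union -- correct, but more lines.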

The Rust ecosystem is 10x larger. When we need an HTTP client, a JSON parser, a TLS library, a logging framework -- Rust has battle-tested options. Zig's ecosystem requires us to write more ourselves or use C libraries through Zig's C interop (which, to be fair, is excellent).

Rust documentation is world-class. The Rust Book, Rustonomicon, standard library docs, and community blogs form the best documentation corpus of any systems language. Zig's documentation is improving but there are still areas where the answer is "read the source code" or "ask on Discord."

Rust's unsafe boundary is valuable. In Zig, all memory is manually managed and all pointer operations are available everywhere. Rust's clear boundary between safe and unsafe code is genuinely useful for auditing. We compensate in Zig with aggressive use of std.debug.assert, safety-checked builds in CI, and code review discipline -- but it is not the same as a compiler-enforced boundary.
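The boundary looks like this in practice -- a hypothetical safe wrapper whose single unsafe block is the audit point, with the bounds check that makes it sound sitting right next to it:

```rust
// Sketch of the audit boundary: one unsafe block behind a safe function.
// The function name and page representation are illustrative.
fn read_u64_at(page: &[u8], offset: usize) -> Option<u64> {
    let end = offset.checked_add(8)?;
    if end > page.len() {
        return None; // the check that upholds the safety contract below
    }
    // SAFETY: bounds verified above; any 8 bytes form a valid u64.
    let raw = unsafe { page.as_ptr().add(offset).cast::<u64>().read_unaligned() };
    Some(u64::from_le(raw))
}
```

Auditing this means reading one function. In Zig, the equivalent pointer arithmetic is legal anywhere, so the audit surface is the whole codebase unless convention narrows it.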

The Conclusion

Rust is a better general-purpose systems language. If we were building a web service, a CLI tool, a networking proxy, or most other systems software, we would use Rust without hesitation.

But we are building a database storage engine. This means we need self-referential data structures, precise control over memory layout, and sub-second compilation for rapid iteration. These specific requirements make Zig the better tool for this specific job.

The industry has a habit of turning language choices into identity. We refuse. We use TypeScript for our web applications. We use Gleam for our bid processing pipeline. We use Zig for our storage engine. Each choice is made on the specific technical merits for the specific problem domain.

If Zig's compile times regressed to minutes, or if Rust's borrow checker added first-class support for graph structures, we would re-evaluate. Tooling loyalty is a liability. Results are what matter.

Our storage engine processes 1.4 million operations per second. It compiles in 4.2 seconds. When we change a function, we see benchmark results in under a second. That is the engineering environment we need, and Zig is what gives it to us.

Copyright © 2026 Anokuro Pvt. Ltd. Singapore. All rights reserved.