We Replaced ESLint with oxlint and Our CI Got 23x Faster

Life is too short to wait for JavaScript tooling written in JavaScript.

By Anokuro Engineering · Tooling

Our ESLint setup took 8.3 seconds cold and 2.1 seconds cached. We had 47 plugins. Forty-seven. Every one of them added for a good reason at some point, and collectively they turned lint into the slowest step in our CI pipeline. Slower than TypeScript compilation. Slower than tests. Slower than building the actual production bundle.

We replaced it all with oxlint. Cold runs now take 360ms. Cached runs take 89ms. That is a 23x improvement cold and a 24x improvement cached. Our CI pipeline dropped from 4 minutes 12 seconds to 3 minutes 6 seconds. On every pull request. For every engineer. Every day.


The Before: 47 Plugins of Pain

Over three years, our ESLint configuration grew into a monster. Here is a partial list of what we were running:

  • @typescript-eslint/eslint-plugin (73 rules enabled)
  • eslint-plugin-react and eslint-plugin-react-hooks
  • eslint-plugin-jsx-a11y
  • eslint-plugin-import (the single slowest plugin, responsible for ~40% of lint time)
  • eslint-plugin-unused-imports
  • eslint-plugin-promise
  • eslint-plugin-unicorn (42 rules)
  • eslint-plugin-sonarjs
  • eslint-plugin-security
  • Various internal plugins for ad-component patterns

The eslint-plugin-import situation deserves special mention. This single plugin accounted for 3.2 seconds of our 8.3-second cold run. It resolves every import in every file to verify that it exists and is correctly ordered. Noble goal. Catastrophic implementation. It runs its own module resolution, completely separate from TypeScript's, which leads to both false positives and glacial performance.

We measured lint times across our monorepo (387 TypeScript files, ~94,000 lines):

| Metric | ESLint | oxlint |
|--------|--------|--------|
| Cold run | 8.3s | 360ms |
| Cached run | 2.1s | 89ms |
| Memory peak | 847MB | 63MB |
| Parallelism | Single-threaded JS | Multi-threaded Rust |

Those memory numbers are not typos. ESLint peaked at 847MB parsing our codebase. oxlint used 63MB. On CI runners with 4GB of RAM, this matters.

The Migration

We spent one engineer-week on the migration. The process:

Step 1: Audit existing rules. We exported every enabled ESLint rule and categorized them:

  • Direct equivalent in oxlint: 127 rules. These mapped one-to-one. oxlint implements most of @typescript-eslint, eslint-plugin-react, eslint-plugin-unicorn, eslint-plugin-jsx-a11y, and eslint-plugin-import rules natively.
  • Close equivalent: 18 rules. Slightly different semantics but same intent. We tested each against our codebase and confirmed the oxlint version caught the same issues.
  • No equivalent, still needed: 9 rules. These were either highly specific to our codebase or from niche plugins.
  • No equivalent, not needed: 34 rules. Rules we had enabled but that never actually caught real bugs. We had data: three years of git history showed zero commits where these rules prevented an actual issue.
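The audit itself is scriptable. A minimal sketch of the bucketing step, where the rule names and data are illustrative examples rather than our actual audit output:

```typescript
// Bucket enabled ESLint rules by whether oxlint implements them natively.
// The rule names below are illustrative, not our real audit data.
type Audit = { direct: string[]; missing: string[] };

function auditRules(
  enabled: string[],          // rules enabled in the ESLint config
  oxlintRules: Set<string>,   // rules oxlint supports natively
): Audit {
  const audit: Audit = { direct: [], missing: [] };
  for (const rule of enabled) {
    (oxlintRules.has(rule) ? audit.direct : audit.missing).push(rule);
  }
  return audit;
}

// Example: two rules covered natively, one custom rule left over.
const result = auditRules(
  ["no-unused-vars", "eqeqeq", "anokuro/no-state-in-ad-renders"],
  new Set(["no-unused-vars", "eqeqeq"]),
);
console.log(result.direct.length); // 2
console.log(result.missing);       // ["anokuro/no-state-in-ad-renders"]
```

The "close equivalent" and "not needed" buckets still required human judgment; the script only separated the mechanical cases from the ones worth discussing.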

Step 2: Drop the 34 unnecessary rules. This was the easiest part. We removed them from the ESLint config first, ran that way for two weeks, and confirmed nothing caught fire.

Step 3: Write custom oxlint rules for the 9 we still needed. oxlint supports custom rules via its plugin system. More on this below.

Step 4: Switch. We replaced the ESLint config with an oxlint config, updated CI, and deleted 1,247 lines of .eslintrc.js. The PR that removed our ESLint configuration was the most satisfying code deletion of the year.
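For reference, an oxlint configuration is a single JSON file. A hedged sketch of the general shape an .oxlintrc.json takes; the plugins and rules listed here are illustrative, not our production config:

```json
{
  "plugins": ["typescript", "react", "unicorn", "jsx-a11y", "import"],
  "categories": {
    "correctness": "error"
  },
  "rules": {
    "eqeqeq": "error",
    "no-unused-vars": "warn"
  }
}
```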

Step 5: Run both in parallel for one week. We ran ESLint and oxlint side-by-side in CI (non-blocking for ESLint) to catch any discrepancies. We found 3 cases where oxlint was stricter than ESLint (it caught real bugs ESLint missed) and 0 cases where ESLint caught something oxlint did not.
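Comparing the two linters' output during that week was mechanical once diagnostics were normalized to a common key. A sketch, assuming each report has already been flattened to file:line:rule strings:

```typescript
// Find findings reported by one linter but not the other.
// Diagnostics are assumed normalized to "file:line:rule" strings.
function diffFindings(a: string[], b: string[]): string[] {
  const seen = new Set(b);
  return a.filter((d) => !seen.has(d));
}

// Illustrative reports, not our real output.
const eslintOut = ["src/a.ts:10:eqeqeq"];
const oxlintOut = ["src/a.ts:10:eqeqeq", "src/b.ts:4:no-unused-vars"];

// The stricter side: what oxlint caught that ESLint missed.
console.log(diffFindings(oxlintOut, eslintOut)); // ["src/b.ts:4:no-unused-vars"]
```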

Custom oxlint Rules for Ad Components

We have two custom rules that are specific to our ad-serving codebase:

anokuro/type-safe-route-params: Our TanStack Router setup uses typed route parameters. This rule ensures that any component accessing route params uses our typed useAdRouteParams() hook instead of the generic useParams(). The generic version returns Record<string, string>, which has caused production bugs where someone accessed params.campaignId (camelCase) when the actual param was params.campaign_id (snake_case). With the typed hook, this is a compile-time error.

// Bad - caught by our custom rule
const { campaignId } = useParams()

// Good - typed, autocompleted, compile-time checked
const { campaign_id } = useAdRouteParams()
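The typed hook pattern itself is simple. A minimal sketch of how such an accessor can be made safe; our real useAdRouteParams() derives its types from TanStack Router's generated route tree, while this standalone version uses an explicit schema and validates keys at runtime:

```typescript
// Sketch: a params accessor that only permits known, declared keys.
// The param names here are illustrative.
const AD_ROUTE_PARAMS = ["campaign_id", "slot_id"] as const;
type AdRouteParams = Record<(typeof AD_ROUTE_PARAMS)[number], string>;

function parseAdRouteParams(raw: Record<string, string>): AdRouteParams {
  for (const key of AD_ROUTE_PARAMS) {
    if (!(key in raw)) throw new Error(`missing route param: ${key}`);
  }
  return raw as AdRouteParams;
}

const params = parseAdRouteParams({ campaign_id: "c42", slot_id: "s7" });
console.log(params.campaign_id); // "c42"
// params.campaignId;            // compile-time error: property does not exist
```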

anokuro/no-state-in-ad-renders: Ad render components must be pure functions of their props. Any use of useState, useReducer, or useRef inside a component in the src/ads/ directory triggers this rule. Ad components render in iframes with strict performance budgets (sub-16ms render time), and local state introduces re-render cycles that blow those budgets. All state management for ads goes through our server-driven props pipeline.
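In code terms, the rule enforces that an ad render is a deterministic function of its props. A hedged sketch with illustrative component and prop names (reduced to a plain render function rather than real JSX, to keep it self-contained):

```typescript
// Illustrative ad render component: a pure function of its props.
// No useState/useReducer/useRef -- same props always yield the same output.
interface AdBannerProps {
  headline: string;
  clickUrl: string;
}

function renderAdBanner({ headline, clickUrl }: AdBannerProps): string {
  return `<a href="${clickUrl}"><h2>${headline}</h2></a>`;
}

// Deterministic: rendering twice with the same props is byte-identical.
const bannerProps = { headline: "Fly more", clickUrl: "https://example.com" };
console.log(renderAdBanner(bannerProps) === renderAdBanner(bannerProps)); // true
```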

Writing these rules took about a day. oxlint's AST visitor API is straightforward if you have written Rust. For teams that do not write Rust, this is the one genuine disadvantage of oxlint: custom rules require Rust, whereas ESLint custom rules are JavaScript. For us this was not a barrier since half our tooling is in Rust or Zig already.

Why oxlint Is Fast

oxlint is part of the oxc project (Oxidation Compiler), which is building a complete JavaScript toolchain in Rust. The performance comes from three things:

  1. Rust, not JavaScript. Parsing and analyzing JavaScript in a language with zero-cost abstractions, no garbage collector, and predictable memory layout is fundamentally faster than doing it in JavaScript. This is not a matter of optimization. It is a matter of physics.

  2. Purpose-built parser. oxc's parser is written specifically for tooling, not for execution. It produces an AST optimized for analysis (compact memory representation, cache-friendly node layout) rather than for code generation. ESLint uses espree (or @typescript-eslint/parser), which was designed for a different set of trade-offs.

  3. Parallel by default. oxlint lints files in parallel across all available CPU cores. ESLint is single-threaded. On our 8-core CI runners, this alone accounts for roughly a 5-6x speedup before you even consider the per-file performance difference.
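The rough arithmetic behind that decomposition: if parallelism on 8 cores yields an effective ~5.5x in practice (below the ideal 8x because of I/O and uneven file sizes; the 5.5 figure is an assumption for illustration), the remaining per-file speedup needed to reach the measured total is about 4x:

```typescript
// Back-of-envelope: decompose the measured total speedup into a
// parallelism factor and a per-file (single-core) factor.
const coldEslintMs = 8300;
const coldOxlintMs = 360;

const totalSpeedup = coldEslintMs / coldOxlintMs;   // measured, ~23x
const parallelFactor = 5.5;                          // assumed effective gain on 8 cores
const perFileFactor = totalSpeedup / parallelFactor; // implied single-core gain

console.log(totalSpeedup.toFixed(1));  // "23.1"
console.log(perFileFactor.toFixed(1)); // "4.2"
```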

The 13x memory reduction (847MB to 63MB) is equally important. Lower memory usage means better cache utilization, less GC pressure (well, no GC pressure), and the ability to run lint alongside other CI steps without OOM kills.

The Broader Argument: Rewrite Everything

oxlint is part of a pattern. The entire JavaScript ecosystem is being rewritten in faster languages:

  • oxc (Rust): Parser, linter, transformer, resolver. Replaces ESLint, Babel, and webpack's resolver.
  • Biome (Rust): Formatter and linter. Replaces Prettier and ESLint.
  • Bun (Zig): Runtime, bundler, package manager, test runner. Replaces Node, webpack, npm, and Jest.
  • Turbopack (Rust): Bundler. Replaces webpack.
  • SWC (Rust): Compiler. Replaces Babel.
  • Lightning CSS (Rust): CSS parser, transformer, minifier. Replaces PostCSS and cssnano.

Every tool in this list is 10-100x faster than what it replaces. This is not incremental improvement. This is a generational shift.

The JavaScript ecosystem spent fifteen years building developer tools in JavaScript because it was convenient: one language for everything, easy to contribute, npm distribution. That convenience came with a cost measured in millions of cumulative developer-hours spent waiting for tools to finish. A cold npm install && eslint . && prettier --check . && tsc && webpack on a medium-sized project routinely takes 30-60 seconds. The Rust/Zig equivalents do the same work in 2-3 seconds.

We adopted this philosophy early. Our stack is: Bun for runtime, oxlint for linting, Biome for formatting, SWC via Vite for compilation, and Bun's built-in bundler for production builds. Our entire CI lint+format+typecheck+build step takes 47 seconds. Teams our size with traditional JS tooling routinely report 5-10 minutes for the same steps.

The Numbers That Matter

Since switching to oxlint three months ago:

  • CI time saved per PR: 66 seconds (lint step reduction)
  • PRs per week: ~140
  • Weekly CI time saved: 2.6 hours
  • Monthly CI compute cost reduction: $340 (fewer billed CI minutes)
  • Developer interruption reduction: Immeasurable but significant. The difference between a 2-second lint and an 8-second lint is the difference between flow state and context switch.
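The weekly figure is straight multiplication, using the numbers above:

```typescript
// Weekly CI time saved: seconds saved per PR times PRs per week.
const secondsSavedPerPr = 66;
const prsPerWeek = 140;

const hoursSavedPerWeek = (secondsSavedPerPr * prsPerWeek) / 3600;
console.log(hoursSavedPerWeek.toFixed(1)); // "2.6"
```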

The $340/month in CI costs is irrelevant. The developer experience improvement is everything. When linting is instant, engineers lint constantly. They lint on save. They lint before committing. They lint for fun. When linting takes 8 seconds, engineers disable it locally and rely on CI to catch issues, which means slower feedback loops and more back-and-forth on pull requests.

Fast tools change behavior. Changed behavior changes code quality. That is the real ROI of replacing ESLint with oxlint, and it is not something you can put in a spreadsheet.

Copyright © 2026 Anokuro Pvt. Ltd. Singapore. All rights reserved.