Performance
evlog is designed for production. Every operation is benchmarked, and CI tracks regressions. This page documents the methodology, the current numbers, and what they mean for your application.
Methodology
Benchmarks run with Vitest bench (powered by tinybench):
- Each benchmark runs for 500ms after JIT warmup
- Results report ops/sec, mean, p75, p99, p995, and p999 latency
- All benchmarks run in silent mode (silent: true) to isolate library overhead from I/O
- JSON output is saved for CI comparison between commits
Run benchmarks locally:
```sh
cd packages/evlog
bun run bench
```
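The methodology above (warmup, a fixed measurement window, ops/sec reporting) can be sketched with a minimal hand-rolled loop. This is only an illustration of the idea; the real suite uses Vitest bench (tinybench), and the measured operation here is a stand-in.

```typescript
// Minimal self-contained sketch of the benchmark methodology: warm up
// the JIT, run the operation for a fixed window, report ops/sec.
// The real suite uses Vitest bench (powered by tinybench).
function benchOp(name: string, op: () => void, durationMs = 500): number {
  for (let i = 0; i < 10_000; i++) op(); // JIT warmup
  let iterations = 0;
  const start = performance.now();
  while (performance.now() - start < durationMs) {
    op();
    iterations++;
  }
  const opsPerSec = iterations / ((performance.now() - start) / 1000);
  console.log(`${name}: ${Math.round(opsPerSec)} ops/sec`);
  return opsPerSec;
}

// Example: measure a plain object spread, a stand-in for the
// operations benchmarked in the sections below.
const ops = benchOp("object spread", () => ({ ...{ a: 1, b: 2 } }));
```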
Core operations
These benchmarks measure the cost of evlog's fundamental building blocks in isolation.
Logger creation
| Operation | ops/sec | Mean |
|---|---|---|
| createLogger() (no context) | ~18M | 0.06µs |
| createLogger() (shallow context) | ~19M | 0.05µs |
| createLogger() (nested context) | ~18M | 0.06µs |
| createRequestLogger() | ~18M | 0.06µs |
Creating a logger is essentially free — it's a closure over a plain object spread. No allocation pressure.
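The "closure over a plain object spread" shape can be sketched like this. This is an illustration, not evlog's actual implementation; the method names and the shallow merge in set() are simplifications (the real set() deep-merges, as described in the next section).

```typescript
// Illustrative sketch of why logger creation is cheap: a closure over
// one object spread, no class machinery. Names are hypothetical.
type Context = Record<string, unknown>;

function createLoggerSketch(initial: Context = {}) {
  let ctx: Context = { ...initial }; // a single shallow copy
  return {
    // Shallow merge for brevity; evlog's real set() deep-merges
    set(fields: Context) {
      ctx = { ...ctx, ...fields };
    },
    context(): Context {
      return ctx;
    },
  };
}

const log = createLoggerSketch({ service: "api" });
log.set({ requestId: "req_1" });
```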
log.set() — accumulating context
| Operation | ops/sec | Mean |
|---|---|---|
| Shallow merge (3 fields) | ~12M | 0.08µs |
| Shallow merge (10 fields) | ~11M | 0.09µs |
| Deep nested merge | ~7M | 0.14µs |
| 4 sequential set() calls | ~4M | 0.24µs |
set() uses deepDefaults — a recursive merge that preserves existing values. The cost scales linearly with object depth, not breadth.
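A deepDefaults-style merge can be sketched as follows: existing values win, and recursion only happens per nested object level, which is why cost tracks depth rather than breadth. This is a hedged sketch; evlog's real helper may handle more edge cases (arrays, prototypes, etc.).

```typescript
// Sketch of a recursive merge that preserves existing values.
// Recursion happens once per level of nesting, so cost scales with
// object depth, not with the number of keys at each level.
type Obj = Record<string, unknown>;

function isPlainObject(v: unknown): v is Obj {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function deepDefaults(target: Obj, defaults: Obj): Obj {
  const out: Obj = { ...target };
  for (const [key, value] of Object.entries(defaults)) {
    if (isPlainObject(out[key]) && isPlainObject(value)) {
      out[key] = deepDefaults(out[key] as Obj, value); // recurse per depth level
    } else if (!(key in out)) {
      out[key] = value; // only fill missing keys: existing values are preserved
    }
  }
  return out;
}

const merged = deepDefaults(
  { user: { id: 1 } },
  { user: { id: 999, plan: "free" }, region: "eu" },
);
// existing user.id survives; missing user.plan and region are filled in
```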
log.emit() — building the wide event
| Operation | ops/sec | Mean |
|---|---|---|
| Emit minimal event | ~1.6M | 0.6µs |
| Emit with context (typical request) | ~1M | 1.0µs |
| Emit with error | ~62K | 16µs |
| Full lifecycle (create + 3 sets + emit) | ~937K | 1.1µs |
The full request lifecycle — create a logger, accumulate context across 3 set() calls, and emit — costs about 1 microsecond. That's ~0.001ms of overhead per request.
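The lifecycle being measured can be sketched end to end. The shapes and field names below are illustrative stand-ins, not evlog's real types; the point is the amount of work involved: three merges and one object assembly per request.

```typescript
// Self-contained sketch of the benchmarked "full lifecycle": create a
// logger, accumulate context across three set() calls, assemble the
// wide event on emit. Field names are illustrative.
type Fields = Record<string, unknown>;

function fullLifecycle(): Fields {
  let ctx: Fields = { service: "api" };   // createLogger()
  ctx = { ...ctx, requestId: "req_1" };   // set() #1: request identity
  ctx = { ...ctx, userId: 42 };           // set() #2: auth context
  ctx = { ...ctx, route: "/orders" };     // set() #3: routing context
  // emit(): one wide event per request, timestamped at emit time
  return { ts: new Date().toISOString(), ...ctx, status: 200 };
}

const wideEvent = fullLifecycle();
```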
emit with error is significantly slower (~62K ops/sec) because Error.captureStackTrace() is an expensive V8 operation. This is expected: stack trace capture costs ~15µs regardless of the logging library, and it only runs when errors are thrown, not on every request.
Payload size scaling
| Payload | ops/sec | Mean |
|---|---|---|
| Small (2 fields) | ~1.2M | 0.8µs |
| Medium (50 fields) | ~118K | 8.5µs |
| Large (200 nested fields) | ~27K | 37µs |
Wide events with 50+ fields remain fast. Even at 200 deeply nested fields, emit takes under 40µs — well within budget for any HTTP request.
Formatting
| Mode | ops/sec | Mean |
|---|---|---|
| Silent (event build only) | ~1.4M | 0.7µs |
| JSON serialization (production) | ~1.4M | 0.7µs |
| Pretty print (development) | ~1.4M | 0.7µs |
| Raw JSON.stringify (baseline) | ~2M | 0.5µs |
evlog adds roughly 30% overhead over raw JSON.stringify — the difference is new Date().toISOString(), the spread operator for building the WideEvent, and the sampling check.
In development, the pretty printer with ANSI colors runs at the same speed when console output is mocked — the actual I/O (writing to stdout) is the bottleneck, not the formatting logic.
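The three overhead sources named above (timestamp formatting, the spread into the wide event, the sampling check) can be sketched in one function. This is an illustration of where the ~30% gap over raw JSON.stringify comes from, not evlog's actual emit path.

```typescript
// Sketch of a JSON emit path and its overhead over raw stringify:
// an ISO timestamp, a spread to build the wide event, a sampling check.
type Fields = Record<string, unknown>;

function emitJson(fields: Fields, sampleRate = 1): string | undefined {
  if (Math.random() >= sampleRate) return undefined; // sampling check
  const wideEvent = {
    ts: new Date().toISOString(), // new Date().toISOString() cost
    ...fields,                    // spread to assemble the wide event
  };
  return JSON.stringify(wideEvent); // the baseline cost
}

const line = emitJson({ path: "/health", status: 200 });
```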
Sampling
| Operation | ops/sec | Mean |
|---|---|---|
| shouldKeep() — no match | ~41M | 0.02µs |
| shouldKeep() — status match | ~40M | 0.03µs |
| shouldKeep() — duration match | ~42M | 0.02µs |
| shouldKeep() — path glob match | ~41M | 0.02µs |
| Full emit with head + tail sampling | ~4.8M | 0.2µs |
Sampling overhead is negligible. Both head sampling (random percentage) and tail sampling (condition evaluation) complete in under 30 nanoseconds, and even path glob matching with matchesPattern() adds no measurable cost.
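The head/tail split can be sketched as a handful of cheap comparisons, which is why it stays in the nanosecond range. The specific conditions and the regex stand-in for glob matching below are illustrative; evlog's real shouldKeep() rules and matchesPattern() syntax may differ.

```typescript
// Sketch of head + tail sampling: tail rules always keep "interesting"
// events; head sampling keeps a random fraction of the rest.
interface SampleEvent {
  status: number;
  durationMs: number;
  path: string;
}

function shouldKeepSketch(event: SampleEvent, headRate: number): boolean {
  // Tail sampling: condition evaluation, a few comparisons
  if (event.status >= 500) return true;           // status match
  if (event.durationMs > 1000) return true;       // duration match
  if (/^\/admin\//.test(event.path)) return true; // path match (regex stand-in for globs)
  // Head sampling: keep a random percentage of everything else
  return Math.random() < headRate;
}

shouldKeepSketch({ status: 500, durationMs: 12, path: "/x" }, 0.1); // kept: error status
```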
Enrichers
| Enricher | ops/sec | Mean |
|---|---|---|
| User Agent (Chrome) | ~2.5M | 0.4µs |
| User Agent (Firefox) | ~4M | 0.25µs |
| User Agent (Googlebot) | ~4.5M | 0.22µs |
| Geo (Vercel headers) | ~5.4M | 0.18µs |
| Geo (Cloudflare) | ~1M | 1.0µs |
| Request Size | ~26M | 0.04µs |
| Trace Context | ~4.6M | 0.22µs |
| All enrichers combined | ~442K | 2.3µs |
| All enrichers (no headers) | ~1.9M | 0.5µs |
Running the full enricher pipeline — User Agent parsing, Geo extraction, Request Size, and Trace Context — adds about 2.3µs per request. The User Agent regex parsing is the most expensive enricher.
When headers are absent (common for internal/health check requests), enrichers short-circuit immediately.
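The short-circuit behavior can be sketched with a single enricher: when its input header is missing, it returns immediately and does no work. The geo field shape below is illustrative; only the Vercel-style header name comes from the table above.

```typescript
// Sketch of an enricher that short-circuits when its header is absent,
// so header-less internal/health-check requests pay almost nothing.
function enrichGeo(headers: Record<string, string | undefined>) {
  const country = headers["x-vercel-ip-country"]; // Vercel-style geo header
  if (country === undefined) return {};           // short-circuit: no header, no work
  return { geo: { country } };                    // illustrative output shape
}

enrichGeo({});                              // empty result, nothing parsed
enrichGeo({ "x-vercel-ip-country": "DE" }); // geo context extracted
```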
Error handling
| Operation | ops/sec | Mean |
|---|---|---|
| createError() (string) | ~216K | 4.6µs |
| createError() (full options) | ~208K | 4.8µs |
| parseError() (EvlogError) | ~15M | 0.07µs |
| parseError() (plain Error) | ~41M | 0.02µs |
| Round-trip (create + parse) | ~164K | 6.1µs |
| toJSON() | ~12M | 0.08µs |
| JSON.stringify() | ~2.3M | 0.43µs |
createError() costs ~5µs, dominated by V8's Error.captureStackTrace(). parseError() is essentially free — it's just property access.
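The cost asymmetry can be sketched with a custom error class: construction captures a stack trace (the expensive part, paid once per thrown error), while parsing is plain property access. EvlogError's real shape may differ; the why/fix fields here are illustrative.

```typescript
// Sketch of the create-vs-parse asymmetry. Stack capture happens in
// the Error constructor; parsing only reads properties.
interface ParsedError {
  message: string;
  why?: string;
  fix?: string;
}

class SketchError extends Error {
  constructor(
    message: string,
    public why?: string,
    public fix?: string,
  ) {
    super(message); // stack trace is captured here: the expensive µs-scale part
    this.name = "SketchError";
  }
}

function parseErrorSketch(err: unknown): ParsedError {
  if (err instanceof SketchError) {
    // Plain property access: this is why parsing is essentially free
    return { message: err.message, why: err.why, fix: err.fix };
  }
  if (err instanceof Error) return { message: err.message };
  return { message: String(err) };
}

const parsed = parseErrorSketch(new SketchError("db down", "connection refused"));
```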
Real-world overhead
For a typical API request that creates a logger, sets context 3 times, and emits:
| Component | Cost |
|---|---|
| Logger creation | 0.06µs |
| 3× set() calls | 0.24µs |
| emit() (silent) | 0.7µs |
| Sampling evaluation | 0.02µs |
| Full enricher pipeline | 2.3µs |
| Total evlog overhead | ~3.3µs |
That's 0.003ms per request — orders of magnitude below any HTTP framework or database overhead.
CI regression tracking
Every pull request that touches packages/evlog/src/ or packages/evlog/bench/ automatically runs benchmarks against the main branch. A comparison report is posted as a PR comment showing:
- ops/sec delta for every benchmark
- p99 latency changes
- Regressions (any benchmark more than 10% slower), flagged with a warning
This ensures performance never silently degrades across releases.
Running benchmarks
```sh
# Run all benchmarks with table output
bun run bench

# Export results as JSON
bun run bench:json

# Compare two benchmark runs
bun bench/compare.ts baseline.json current.json
```
Benchmark files live in packages/evlog/bench/:
| File | What it measures |
|---|---|
| logger.bench.ts | createLogger, log.set(), log.emit(), payload sizes |
| format.bench.ts | JSON vs pretty print vs silent mode |
| sampling.bench.ts | Head sampling, tail sampling, combined |
| enrichers.bench.ts | Per-enricher cost, full pipeline |
| errors.bench.ts | createError, parseError, serialization |