Date: 2026-01-30T19:52:35-08:00



Summary

SQLite can scale significantly further than commonly assumed through vertical scaling (hardware upgrades), WAL mode optimizations, and distributed read replicas via platforms like Turso. The theoretical limit is 281TB, and production systems have demonstrated millions of queries per second on a single high-end server. However, concurrent write performance remains the primary constraint.


Key Findings

1. Vertical Scaling (Hardware Optimization)

Database Size:

  • Theoretical maximum: 281 terabytes (2^48 bytes)
  • Practical recommendation: Systems remain performant well into multiple terabytes with proper optimization
  • Version-dependent limits: older releases defaulted to roughly 64TB (extendable to 140TB); SQLite 3.33.0+ raised the cap to the full 281TB

Hardware Performance:

  • Real-world benchmark: A single server with 192 CPU cores, 1TB RAM, and 3TB NVMe SSD achieved 4 million queries per second on SQLite
  • Modern servers now support up to 8TB RAM, hundreds of cores, and NVMe SSDs with performance approaching RAM speeds
  • CPU scaling is nearly linear — adding physical cores increases performance proportionally
  • NVMe SSD performance dominates; even when ~25% of reads miss the page cache and go to disk, the performance impact is negligible

Optimization techniques for single-machine scaling:

  • Enable WAL (Write-Ahead Logging) mode for 4x write speed improvement and concurrent read/write capability
  • Use PRAGMA synchronous = NORMAL in WAL mode (fsync happens only at checkpoints, not on every commit)
  • Batch multiple writes into a single transaction
  • Tune page cache and memory settings based on available RAM
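
Put together, these settings look like the following minimal sketch using Python's standard `sqlite3` module; the database path and cache size are illustrative choices, not values from the benchmarks above:

```python
import os
import sqlite3
import tempfile

# Illustrative setup; the path and cache size are placeholder choices.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers, faster writes
conn.execute("PRAGMA synchronous=NORMAL")  # fsync only at WAL checkpoints
conn.execute("PRAGMA cache_size=-64000")   # ~64 MB page cache (negative = KiB)

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Batch many writes into a single transaction instead of committing per row.
rows = [(f"event-{i}",) for i in range(1000)]
with conn:  # opens a transaction, commits on success
    conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)
```

The batching step is the one that matters most for write throughput: each commit implies durability work, so a thousand rows in one transaction is far cheaper than a thousand single-row commits.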

2. Read Scaling via Replication (Turso & Similar Platforms)

Turso enables geographic and logical read distribution through Embedded Replicas:

How it works:

  • Local SQLite replicas synchronize with a primary database using Write-Ahead Log frames
  • Reads are served from local replicas (microsecond latency)
  • Writes are forwarded to the primary database
  • Sync interval is configurable (default ~60 seconds for edge CDN patterns)
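
The read-local/write-primary flow above can be sketched with only the standard library. To be clear about what this is: Turso's real client is libsql and its sync ships WAL frames over the network; the class below is a toy local stand-in (its name, methods, and use of SQLite's backup API are my own illustration, not Turso's API):

```python
import sqlite3

class EmbeddedReplica:
    """Toy read-local / write-primary replica. Illustrative only:
    Turso syncs WAL frames over the network; here we copy pages with
    SQLite's backup API just to show the same read/write split."""

    def __init__(self, primary_path: str, replica_path: str):
        self.primary = sqlite3.connect(primary_path)
        self.replica = sqlite3.connect(replica_path)

    def sync(self) -> None:
        # Pull the primary's current state into the local replica.
        self.primary.backup(self.replica)

    def read(self, sql: str, params=()):
        # Reads hit the local replica (fast, but possibly stale).
        return self.replica.execute(sql, params).fetchall()

    def write(self, sql: str, params=()) -> None:
        # Writes are forwarded to the single primary.
        with self.primary:
            self.primary.execute(sql, params)
```

Reads served between `sync()` calls can lag the primary, which is exactly the eventual-consistency trade-off noted under Limitations below.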

Scalability benefits:

  • Deploy replicas in any region for zero-latency reads
  • Unlimited embedded replicas on all plans (including free tier)
  • Implement a “database CDN” architecture with edge replicas
  • Effective strategy: Identify “hot” (frequently-read) data and replicate that specifically

Limitations:

  • Replicas have eventual consistency (not strong consistency)
  • Writes still route through a single primary database

3. Concurrent Write Constraints

SQLite’s primary scalability bottleneck is concurrent writes:

Core limitation:

  • Only one writer at a time (global database lock)
  • Concurrent write attempts fail with SQLITE_BUSY (“database is locked”) unless a busy timeout or retry is configured
  • No FIFO guarantee in lock acquisition
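
The single-writer behavior is easy to reproduce with two connections to the same file. One detail worth knowing: Python's `sqlite3` defaults to a 5-second busy-wait, so `timeout=0` is set below to surface the error immediately:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "lock_demo.db")

# Autocommit connections so transactions are controlled explicitly;
# timeout=0 disables Python's default 5-second busy-wait on w2.
w1 = sqlite3.connect(db, isolation_level=None)
w2 = sqlite3.connect(db, isolation_level=None, timeout=0)

w1.execute("CREATE TABLE t (x INTEGER)")

w1.execute("BEGIN IMMEDIATE")           # first writer takes the write lock
w1.execute("INSERT INTO t VALUES (1)")

locked_error = None
try:
    w2.execute("BEGIN IMMEDIATE")       # second writer cannot acquire it
except sqlite3.OperationalError as e:
    locked_error = str(e)               # "database is locked"

w1.execute("COMMIT")                    # lock released; w2 could now proceed
```

In production code the usual mitigation is `PRAGMA busy_timeout` (or a retry loop), which waits rather than failing, but it does not change the one-writer-at-a-time rule.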

Potential improvements:

  • BEGIN CONCURRENT (available on an experimental SQLite branch, not in standard releases) — transactions take the write lock only at commit time, comparing page versions to detect conflicts
  • WAL mode enables concurrent readers during write transactions (but still single writer)
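
The WAL reader/writer behavior can be demonstrated directly: while a write transaction is open and uncommitted, a second connection reads without blocking and sees the last committed snapshot:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "wal_demo.db")

writer = sqlite3.connect(db, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(db, isolation_level=None)

writer.execute("BEGIN IMMEDIATE")        # open, uncommitted write transaction
writer.execute("INSERT INTO t VALUES (2)")

# The reader is not blocked; it sees the last committed snapshot (1 row).
during = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.execute("COMMIT")
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # now 2 rows
```

In rollback-journal mode the same read could block or fail; this snapshot isolation for readers is the main operational reason WAL is the default recommendation above.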

Recommendation for write-heavy workloads:

  • If you need many concurrent writers, migrate to a client/server database (PostgreSQL, MySQL)

4. Other Scaling Approaches Not Mentioned in Original Question

  • Connection pooling & batch operations: Group small operations into larger transactions
  • Read-only replicas: Beyond Turso, you can manually replicate to PostgreSQL or other systems for specific read-heavy queries
  • Sharding: Run multiple independent SQLite databases (one per user/tenant) and query appropriate database based on key
  • Hybrid approaches: Keep operational data in SQLite, archive old data to object storage or data warehouses
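
The per-tenant sharding idea can be sketched as a small router that maps a tenant key to its own database file. This is a sketch under assumptions: the class name, file layout, and schema are illustrative, and a real deployment would add connection eviction and migrations:

```python
import os
import sqlite3
import tempfile

class TenantShards:
    """One SQLite file per tenant; the key routes to the right database.
    Illustrative sketch: real deployments add connection eviction,
    migrations, and their own directory layout."""

    def __init__(self, root: str):
        self.root = root
        self._conns: dict[str, sqlite3.Connection] = {}

    def conn(self, tenant_id: str) -> sqlite3.Connection:
        if tenant_id not in self._conns:
            c = sqlite3.connect(os.path.join(self.root, f"{tenant_id}.db"))
            c.execute("PRAGMA journal_mode=WAL")
            c.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
            self._conns[tenant_id] = c
        return self._conns[tenant_id]

shards = TenantShards(tempfile.mkdtemp())
with shards.conn("acme") as c:
    c.execute("INSERT INTO notes VALUES ('hello from acme')")
```

Because each tenant has its own database, each shard gets its own writer lock, which sidesteps the single-writer bottleneck as long as transactions never need to span tenants.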

Recommendations

Use SQLite at scale when:

  • Your workload is read-heavy or write-sparse
  • You have a single primary writer or can batch writes
  • You want to minimize operational complexity
  • Low-latency local queries are valuable (mobile, edge, local-first apps)

Consider alternatives when:

  • You need thousands of concurrent writers
  • Strong consistency across distributed nodes is critical
  • Your database is fundamentally distributed

Optimization path for scaling:

  1. Start with WAL mode and hardware tuning (achieves 100k+ SELECTs/sec easily)
  2. For read scaling, add Turso embedded replicas
  3. If write concurrency becomes a bottleneck and can’t be batched, plan migration to PostgreSQL
