Dragonfly

Dragonfly is a drop-in replacement for Redis and Memcached built on a modern multi-threaded architecture. It claims up to 25× the throughput of Redis on a single node, scales vertically instead of requiring sharding, and maintains full Redis API compatibility.

What is it?

Dragonfly is an open-source in-memory data store that speaks the Redis and Memcached wire protocols but is built on a shared-nothing, multi-threaded architecture, inspired by the Seastar framework and databases such as ScyllaDB. It runs on Linux (including via Docker) and scales vertically to dozens of cores on a single node.

What does it do?

Dragonfly provides key-value storage, pub/sub, streams, sorted sets, and the other data structures Redis applications depend on — at a fraction of the server count. The project's published benchmarks report 3.8 million QPS on a single AWS c6gn.16xlarge instance, versus roughly 250k for Redis on the same hardware, while using less memory thanks to its compact dashtable hash-table implementation.
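To make the data-structure point concrete, here is a sketch of the sorted-set calls a leaderboard would issue against Dragonfly. `MiniSortedSet` is a hypothetical in-memory stand-in so the example runs without a live server; with a real client library such as redis-py, `zadd` and `zrevrange` take the same shape of arguments and go over the wire instead.

```python
class MiniSortedSet:
    """Tiny stand-in mimicking ZADD / ZREVRANGE semantics (illustrative only)."""

    def __init__(self):
        self.scores = {}

    def zadd(self, mapping):
        # ZADD: insert or update member scores
        self.scores.update(mapping)

    def zrevrange(self, start, stop, withscores=False):
        # ZREVRANGE: members ordered by score, highest first, inclusive stop
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        sliced = ranked[start:stop + 1]
        return sliced if withscores else [member for member, _ in sliced]


board = MiniSortedSet()
board.zadd({"alice": 120, "bob": 95, "carol": 130})
top_two = board.zrevrange(0, 1, withscores=True)
# carol (130) ranks first, then alice (120)
```

Because Dragonfly is protocol-compatible, application code written against these Redis commands needs no changes.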

Where is it used?

Dragonfly slots into existing Redis deployments as a drop-in replacement: session caches, API rate limiters, feature-flag stores, real-time leaderboards, job queues, and pub/sub fanout. Teams adopt it when a Redis cluster has outgrown a single primary and they want to avoid the operational cost of sharding.
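One of the use cases above, API rate limiting, is commonly built on the fixed-window INCR + EXPIRE idiom, which works identically against Redis or Dragonfly. The `StubClient` below is a hypothetical in-memory stand-in so the sketch is self-contained; redis-py exposes `incr()` and `expire()` with the same meaning.

```python
import time


class StubClient:
    """Minimal stand-in for a Redis/Dragonfly client (INCR + EXPIRE only)."""

    def __init__(self):
        self.data = {}     # key -> counter value
        self.expiry = {}   # key -> absolute expiry time

    def incr(self, key):
        now = time.monotonic()
        if key in self.expiry and now >= self.expiry[key]:
            # Window elapsed: the key has "expired", so the counter resets
            del self.data[key]
            del self.expiry[key]
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def expire(self, key, seconds):
        self.expiry[key] = time.monotonic() + seconds


def allow_request(client, user_id, limit=5, window=60):
    """Allow at most `limit` requests per `window` seconds per user."""
    key = f"rate:{user_id}"
    count = client.incr(key)
    if count == 1:
        client.expire(key, window)  # start the window on the first hit
    return count <= limit


client = StubClient()
results = [allow_request(client, "u1", limit=3) for _ in range(5)]
# first three requests allowed, the rest denied within the window
```

In production the same `allow_request` logic would simply receive a real client connected to Dragonfly; the commands it sends are unchanged.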

When & why it emerged

Dragonfly was released by ex-Google engineers in 2022 to attack the fundamental bottleneck of Redis: its single-threaded event loop. Modern servers routinely offer 64+ cores and 1 TB of RAM, yet a Redis instance can use only one of those cores, forcing horizontal scale-out even when vertical scaling would be simpler and cheaper.

Why we use it at Internative

We deploy Dragonfly for clients whose Redis bills are climbing faster than their revenue — typical SaaS businesses with cache-heavy workloads. One Dragonfly node often replaces a 5-to-10-node Redis cluster, cutting hardware cost and on-call complexity in the same migration. Because the protocol is compatible, no application code changes are required.
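A sketch of why such a migration is config-only: the application keeps its client code, and only the connection URL changes. `parse_target` is a hypothetical helper using only the standard library; with redis-py you would instead pass the URL straight to `redis.Redis.from_url(...)`. The hostnames are illustrative placeholders.

```python
from urllib.parse import urlparse


def parse_target(url):
    """Extract (host, port) from a redis:// connection URL (sketch)."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port or 6379


# Before: a sharded Redis deployment. After: a single Dragonfly node.
# Everything else in the application stays the same.
before = parse_target("redis://redis-cluster.internal:6379")
after = parse_target("redis://dragonfly.internal:6379")
```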