
Measuring Redis Network Latency and the Stability of Your Server


Redis has 2 commands to help you discover how fast or slow it is to connect to your Redis server and how stable your server's performance is.


For reference, the commands below were run on my dev box with Redis running inside of Docker.

Network latency: You can test how long it takes your client to connect to Redis by running the following command on the server that’s connecting to Redis (AKA the client). The results are measured in milliseconds:

$ redis-cli --latency
min: 0, max: 1, avg: 0.21 (4017 samples)

This runs the Redis PING command in a loop and measures how long it takes to get each response. In the above case it ran 4,017 times and on average it took 210 microseconds to get a response. The min is the best reported time and the max is the worst.
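Conceptually the tool is just timing round trips and keeping running stats. Here's a minimal sketch of that min / max / avg bookkeeping in Python; the `ping` argument is a stand-in stub here (a real run would call something like redis-py's `ping()` against a live server):

```python
import time

def measure_latency(ping, samples=50):
    """Time `ping` repeatedly and report min/max/avg in milliseconds,
    mirroring the stats that redis-cli --latency prints."""
    best = float("inf")
    worst = 0.0
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        ping()  # stand-in for a real Redis PING round trip
        elapsed = (time.perf_counter() - start) * 1000  # ms
        best = min(best, elapsed)
        worst = max(worst, elapsed)
        total += elapsed
    return best, worst, total / samples

# Fake "server" (a 0.2 ms sleep) so the sketch runs without Redis.
mn, mx, avg = measure_latency(lambda: time.sleep(0.0002))
print(f"min: {mn:.2f}, max: {mx:.2f}, avg: {avg:.2f} (50 samples)")
```

The real tool samples continuously until you stop it, but the aggregation is the same idea.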

Server stability: Redis also has an “intrinsic latency” test which is meant to be run on the machine that Redis is running on. In fact, according to their docs the command below doesn’t even connect to your Redis server. It profiles the machine itself to see how consistently it gives CPU time to a process like Redis:

$ redis-cli --intrinsic-latency 100
Max latency so far: 1 microseconds.
Max latency so far: 11 microseconds.
Max latency so far: 23 microseconds.
Max latency so far: 697 microseconds.
Max latency so far: 1033 microseconds.
Max latency so far: 1733 microseconds.
Max latency so far: 3613 microseconds.

2350622859 total runs (avg latency: 0.0425 microseconds / 42.54 nanoseconds per run).
Worst run took 84928x longer than the average latency.

The basic idea is the lower and more consistent the numbers are, the better. Generally speaking, if you're trying to determine hard latency requirements you should only count on Redis being as fast as its slowest recorded time.

It’s expected these numbers will be pretty irregular, especially on cloud providers where you’re sharing CPU time with other guests on the same physical host. In the above case the average was 42 nanoseconds per run, and even the worst case was only 3.6 milliseconds.

Keep in mind the above are results from running on a ~7 year old dev box with an Intel i5 3.2 GHz CPU where the Redis server is running inside of Docker through WSL 2.
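The intrinsic latency test boils down to a busy loop: read a high resolution clock as fast as possible and record the largest gap between consecutive reads, since any big gap is time the OS spent not running your process. Here's a simplified Python sketch of that idea (not Redis' actual implementation):

```python
import time

def intrinsic_latency(seconds=0.2):
    """Busy-loop reading the monotonic clock and track the largest gap
    between consecutive reads. Big gaps are moments where the scheduler
    gave the CPU to something other than this process."""
    deadline = time.monotonic() + seconds
    runs = 0
    total_ns = 0
    max_ns = 0
    prev = time.monotonic_ns()
    while time.monotonic() < deadline:
        now = time.monotonic_ns()
        gap = now - prev
        max_ns = max(max_ns, gap)
        total_ns += gap
        runs += 1
        prev = now
    return runs, total_ns / runs, max_ns

runs, avg_ns, max_ns = intrinsic_latency()
print(f"{runs} total runs (avg latency: {avg_ns:.1f} ns/run)")
print(f"Worst run took {max_ns} ns")
```

Try running it a few times while your machine is idle versus under load and you should see the worst-case gap swing around a lot more than the average.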

You can quickly test things on your machine or server if you have Docker by running:
# Start the Redis server:
$ docker container run --rm -it --name redis redis:7.0.0-bullseye

# In a second terminal, test the network latency:
$ docker exec redis redis-cli --latency

# In that same terminal, test the intrinsic latency:
$ docker exec redis redis-cli --intrinsic-latency 100

Demo Video


  • 0:36 – It’s nice to be able to measure Redis’ latency
  • 1:23 – Running Redis locally through Docker
  • 2:03 – Checking the network latency between your client and server
  • 4:00 – What is the Redis client?
  • 4:52 – Measuring the intrinsic latency of your Redis server
  • 7:00 – Going over the intrinsic latency results and what they mean

Have you ever seen Redis be a bottleneck in your app? Let me know below.
