Docker Tip #69: Avoid Running Out of Disk Space from Container Logs
By default, Docker's container log files will keep growing until they consume your entire disk unless you tell it not to. Here's how to cap the amount of disk space they use.
Let me just preface this by saying you probably don't need to panic about Docker container logs taking up all of your disk space. That's because when you remove a container (not just stop, but remove), its log files are deleted along with it.
But still, it’s a good idea to limit it because who wants to run out of disk space in production?
Let’s quote Docker’s documentation (as of v18.06):
By default, Docker captures the standard output (and standard error) of all your containers, and writes them in files using the JSON format.
So right away we know that Docker is managing its own log files for containers.
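If you're curious where those files actually live, you can ask Docker for a container's log path. A quick check, with my_container standing in for one of your own container names:

docker inspect --format '{{.LogPath}}' my_container
# Prints something like:
# /var/lib/docker/containers/<container id>/<container id>-json.log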
If we look further down in the docs we can see that the max-size option defaults to -1, which means the log file can grow to an unlimited size. There's also the max-file option, which defaults to keeping around 1 log file.
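Before changing anything, you can confirm that your daemon really is using the default json-file driver:

docker info --format '{{.LoggingDriver}}'
# json-file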
We can change this by editing the daemon.json file, which is located in /etc/docker on Linux. On Docker for Windows / Mac you can open up your settings, go to the “Daemon” tab and flip on the “Advanced” settings.
Add this to your daemon.json to cap your container logs at 10 GB (1,000 x 10 MB files). If your daemon.json doesn't exist yet or is empty, the whole file would look like this:
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "1000"
}
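If you'd rather not (or can't) change the daemon-wide default, the same options can be set per container. A rough sketch, using an nginx image purely as an example:

docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=1000 \
  nginx

Docker Compose supports the same thing through a service's logging key, with the driver and options nested underneath it.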
Then restart Docker and you’ll be good to go.
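On Linux with systemd, that typically means something like the commands below. Keep in mind the new defaults only apply to containers created after the restart, so you'll need to recreate existing containers to pick up the change (my_container is a placeholder):

sudo systemctl restart docker

# After recreating a container, verify the options took effect:
docker inspect --format '{{.HostConfig.LogConfig}}' my_container
# Prints something like: {json-file map[max-file:1000 max-size:10m]}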
If you're installing Docker on Debian or Ubuntu with Ansible then you should check out my Docker role, which sets this up for you automatically.