
Adding Docker Compose Logs to Your CI Pipeline Is Worth It


Visibility into what’s happening is important when you need to debug something. Having these logs present all the time can save a lot of time.


I recently added docker compose logs to always run in my continuous integration (CI) pipeline before I lint, format and test my code. This is one of those things where having it there may save you 5 minutes or even an hour of debugging if something breaks.

I’m sure I don’t need to remind you how important it is to see the values of things to help troubleshoot an issue. Oftentimes printing a variable is enough to see exactly what’s wrong, and showing logs in CI is the same idea.

Perhaps your CI run is failing but you can’t reproduce it locally. Oftentimes I’ll end up pushing a commit like “Temporarily enable logging to help debug CI”, where I enable logging with the intent of disabling it afterwards in a follow-up commit.

But then I started to think: why not keep it around all the time? CI bills tend to be based on build minutes, not the size of the CI log. Your containers are already producing the logs; docker compose logs only shows them to you, so there’s no real time penalty.
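If you also want the logs to print when an earlier step blows up, a shell EXIT trap gives you that “always run” behavior. Here’s a minimal sketch where the logs command is stubbed out with an echo so it runs anywhere; show_logs and ci_run are made-up names, purely to show the ordering:

```shell
#!/usr/bin/env bash
set -eu

# Stand-in for "docker compose logs" so this sketch runs without Docker.
show_logs() { echo "== container logs =="; }

ci_run() {
  # An EXIT trap fires whether the steps below succeed or fail, which is
  # the "keep the logs step in all the time" idea expressed as a trap.
  trap show_logs EXIT
  echo "lint ok"
  false # a failing step; the logs still print on the way out
}

# Run in a subshell so the simulated failure doesn't kill this script.
output="$( (ci_run) 2>&1 || true )"
echo "$output"
```

In a real pipeline you’d swap the stub for docker compose logs --no-color --timestamps so the CI log stays free of ANSI escape codes and each line is timestamped.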

Here’s a snippet from one of my shell scripts that I run in CI. You can find this in my example Docker Flask app repo; notice the logs line about 75% of the way down the file:

shellcheck run bin/*
lint:dockerfile "${@}"

cp --no-clobber .env.example .env

docker compose build
docker compose up -d

# shellcheck disable=SC1091
. .env
wait-until "docker compose exec -T \
  -e PGPASSWORD=${POSTGRES_PASSWORD} postgres \
  psql -U ${POSTGRES_USER} ${POSTGRES_USER} -c 'SELECT 1'"
docker compose logs

lint "${@}"
format --check
flask db reset --with-testdb
test "${@}"

By the way, I’ve gone over that wait-until script in another blog post.
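If you haven’t read that post, the idea is simple enough to sketch: retry a command until it exits 0 or a timeout passes. This is a hedged reconstruction of the concept, not the exact script from that post:

```shell
#!/usr/bin/env bash

# Rough sketch of a wait-until helper (the real script may differ):
# re-run a command every second until it succeeds or the timeout
# (default 60 seconds) is reached.
wait-until() {
  local cmd="$1"
  local timeout="${2:-60}"
  local start elapsed
  start="$(date +%s)"
  until eval "$cmd" > /dev/null 2>&1; do
    elapsed="$(( $(date +%s) - start ))"
    if (( elapsed >= timeout )); then
      echo "wait-until: timed out after ${timeout}s waiting for: $cmd" >&2
      return 1
    fi
    sleep 1
  done
}
```

That’s why the CI script above can safely run its database commands right after docker compose up -d: the wait-until call blocks until Postgres is actually accepting connections.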

How I Arrived at Keeping These Logs in Full Time

I was working on a project recently where I had it all working locally, but CI kept failing with a “web container not found” error and no other details.

I have a habit of staging code to commit with git add -p, which I’ve made videos about in the past. One fun side effect of this is that -p will not add untracked files.

What ended up happening was I forgot to commit a new file I had added, and that wasn’t immediately obvious from only seeing “web container not found”. However, after I added the log output I got a nice full stack trace from my web container saying it couldn’t find a module.

Five seconds later I made the association that the only difference between local, where it works, and CI, where it doesn’t, is that the module exists locally. So then I ran git status and yep, the file with that module was untracked. As soon as I added, committed and pushed it up, CI passed. It was a pretty derptastic debug session that took a solid 10 minutes.

Demo Video


  • 0:14 – Going over the CI process in code
  • 1:16 – Checking out the output in GitHub Actions
  • 2:19 – How I came to realize having logs all the time is great
  • 4:04 – Seeing the log output in CI
  • 5:21 – What happened to me specifically
  • 6:43 – Let’s keep the logs in all the time


Do you think you’ll be adding this to your pipeline? Let me know below!
