A Guide for Running Rails in Docker
We'll go over an example app using Puma, Sidekiq, Action Cable, Postgres, Redis, Hotwire, esbuild and TailwindCSS.
Prefer video? There’s a ~1 hour video version of this blog post on YouTube that goes into a bit more detail about certain topics listed below.
Most of these tips will apply to any Rails 7+ app. Whether you're using Postgres, MySQL or SQLite is all the same. Likewise, some tips are around setting up a Node environment, but if you want to use import maps without Node then you can skip the Node bits.
The app we’re going to cover is at: https://github.com/nickjj/docker-rails-example
You can use that app as a starter app / template for your own apps or reference the bits and pieces you care about. I’ve been keeping it updated for years.
If you prefer using the new built-in Docker support from Rails 7.1+, that's fine too. Everything applies to that as well; the only difference is you'll end up making a lot more manual adjustments since that setup doesn't use Docker Compose or a number of optimizations in its Dockerfile.
This post will be a combination of general Docker tips applied to a Rails app and Rails specific things you’ll want to take into consideration. For the general tips I’ll try my best to link to existing blog posts to fill in the details while tying it into Rails here.
Besides this post, there’s the README file of the example app and 100+ posts I’ve written in the past covering assorted Docker topics.
# Running Your Services in Docker Compose
Besides running your Rails web app, chances are you’ll also be running:
- Postgres or MySQL
- Redis
- Maybe a background worker such as Sidekiq, Resque or GoodJob
- A dedicated Action Cable process
- JS / CSS file watchers
Docker Compose lets you run multiple containers, similar to how you might run multiple processes using foreman with a Procfile.
Docker lets you have dev / prod parity in the sense that you can have a single `Dockerfile` and `docker-compose.yml` power your dev, CI and production environments (assuming you want to do a single server deploy). All of this happens with the same command.
You can use Docker Compose v2 profiles and environment variables to control slight differences between them such as maybe running Postgres in development but using a managed database in production.
When using profiles you can easily only start your JS and CSS watchers in development but not production. The blog post I linked above goes into detail on that.
All in all Docker Compose is nice because it gives you a way to `docker compose up --build` your project in all environments. I've been using it for years, even in production for single server deploys.
# Make It Easy to Run Your Rails Commands
Ruby is all about developer happiness and Rails has wonderful abstractions to make things easy to use by hiding complexity in all the right places.
When running Rails in Docker you'll find yourself running long `docker ...` or `docker compose ...` commands to interact with your app.
Do yourself a favor and hide that complexity in a run script. This will let you run `./run rails g controller Posts` instead of `docker compose exec web rails g controller Posts`.
It’s basically an aliases file but with more flexibility since it’s just a plain old shell script. I’ve been using this pattern for a long time now, it works out very nicely in the end.
Creating Abstractions to Easily Create Commands
This is a spot where you can set up your own abstractions to make it easier to create your own functions in the run script.
For example, let's say you want to run `./run rails db:migrate` or `./run shell` to drop into your container's shell environment.
These 2 functions in the example app are the basis of that:
function _dc {
  docker compose "${DC}" ${TTY} "${@}"
}

function cmd {
  # Run any command you want in the web container
  _dc web "${@}"
}
Now you can define a general purpose `rails` function so you can run `./run rails <anything you run with rails>` such as this:
function rails {
  # Run any Rails commands
  cmd rails "${@}"
}
And the `shell` function is the same as `rails` except it runs `cmd bash "${@}"`.
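Based on the helpers above, a minimal `shell` function could look like this:

```bash
function shell {
  # Start an interactive Bash session inside the web container
  cmd bash "${@}"
}
```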
Setting a TTY or Not
One gotcha with Docker Compose is by default it will allocate a TTY. This is handy if you plan to `docker compose exec` into your container with an interactive shell or Rails console. Basically something where you're keeping a prompt open that's waiting for input.
The gotcha is inside of a CI environment you can’t allocate a TTY so the example app’s run script has a bit of logic in place to detect if you can allocate a TTY or not. This way your commands work the same on your dev box and CI and the logic is only defined once.
It boils down to using the `--no-TTY` flag with `docker compose exec` or not.
That `_dc` function has a reference to `${TTY}`. That's configured near the top of the script:

TTY=""
if [[ ! -t 1 ]]; then
  TTY="-T"
fi
That's a way to detect if a TTY is available. If it's not available then we want to use Docker Compose's `-T` flag which is short for `--no-TTY`.
# Esbuild / Tailwind Watchers in Development
The use case here is in development you'll want the esbuild and tailwind containers to run in `--watch` mode, which means if you change any JS or CSS files they'll get automatically rebuilt in 1 second or less.
In production neither container will actively run. Instead everything will get bundled and minified during a `rails assets:precompile`.
We can handle this use case pretty easily with the `run` script pattern to configure the watchers depending on which environment you're in, and Docker Compose profiles to pick which containers to run.
Your `package.json` can have this snippet:

"scripts": {
  "build": "./run yarn:build",
  "build:css": "./run yarn:build:css"
}
And in the `run` script, both of those commands would be responsible for either using `--watch` or `--minify` depending on which `RAILS_ENV` you're running in.
The example app has a fully working solution for both, but here’s the snippet for esbuild:
function yarn:build {
  # Build JS assets, this is only meant to be referenced from your package.json
  local args=()

  if [ "${NODE_ENV:-}" == "production" ]; then
    args=(--minify)
  elif [ "${RAILS_ENV:-}" == "development" ]; then
    args=(--sourcemap --watch)
  fi

  esbuild app/javascript/*.* --outdir=app/assets/builds --bundle "${args[@]}"
}
Then for controlling whether or not each watcher actively runs, you can set this `.env` value in development: `export COMPOSE_PROFILES=postgres,redis,assets,web,worker,cable`. In production, remove `assets` from that list of profiles.
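If you're wondering how the profiles side of that fits together, here's a hedged sketch of the relevant `docker-compose.yml` bits (service names follow the example app, but the definitions are trimmed way down):

```yaml
services:
  js:
    profiles: ["assets"]
    command: "yarn build"

  css:
    profiles: ["assets"]
    command: "yarn build:css"

  web:
    profiles: ["web"]
```

With `assets` missing from `COMPOSE_PROFILES` in production, the `js` and `css` services simply never start.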
# Running Tests
On your dev box chances are you have `RAILS_ENV=development` set when you run `docker compose up` to start your project.

Rather than spin up a whole new dedicated container with `docker compose run` to run our tests, it's quite a bit faster to connect to the existing container.

That means when you plan to `docker compose exec` into your container and run your tests, you'll want to ensure `RAILS_ENV=test` is set, which is shown below in the code example.
Technically `rails test` sets that for you but that doesn't account for the `yarn` commands which build your assets.
Without setting that env var you may get `ActionView::Template::Error: The asset "application.css" is not present in the asset pipeline.` in CI because you're dealing with a fresh system where you have no assets built for your test environment.
The example app pulls all of this together in the run script's `./run test` function:
function test {
  # Run your Rails tests, use `test -b` to first rebuild your JS and CSS
  local run_build="${1:-}"
  local test_command="rails test"

  if [ "${run_build}" = "-b" ]; then
    test_command="yarn build && yarn build:css && ${test_command}"
  fi

  _dc -e "RAILS_ENV=test" js bash -c "${test_command}"
}
In development you likely won't need to rebuild your assets for tests, but if you do, such as in CI, the option is there by running `./run test -b`. By the way, the example app has a `ci:test` function which is configured to run in GitHub Actions.
We also execute `rails test` in the `js` container because it has everything needed to rebuild both our JS and CSS.
The `js` and `css` containers both use the same Docker image. The only difference is which command gets run. It's very similar to the pattern we have for using the same image for `web` and `worker`. If you've checked out the `docker-compose.yml` file from the example app you know what I mean! This is also covered later on in this post.
Reducing Log Spam in Your Tests
Since `RAILS_ENV=development` was set when the container started, debug level logging will be enabled by default in your tests. That produces a lot of terminal output. Every database query will get output and mixed in with your test's output since the example app configures Rails to log to STDOUT for Docker (more on this later).
Fortunately this is an easy fix. You can edit `config/environments/test.rb` to include the line below. It's already in the example app:
# Reduce log spam.
config.log_level = :warn
Now when you run your tests you’ll see what you normally see which is hopefully a bunch of passing tests with no extra debug information.
# Lock Files and Dependencies
There’s an interesting situation we have with Docker and volume mounts in relation to separating out build time and run-time operations.
Without Docker, normally you would run `bundle install` or `yarn install` on your dev box and this will install your dependencies and produce a lock file such as a `Gemfile.lock` or `yarn.lock`. Then you can commit that lock file to version control and be on your way.
You could also run a `bundle update` to update your dependencies or `bundle outdated` to see what's available in terms of updates. Yarn has similar commands too.
With Docker Things Are a Little Different
We need to build our dependencies into a Docker image. This happens at build time when you run `docker compose build`. Bind mounts aren't available then, which means your freshly minted lock file will be in the image and not the file system where you built things from.
This means your lock file won’t get updated which means you won’t be able to commit it to version control. That’s really bad. Even with Docker, you still want to check in your lock file to ensure repeatable builds with the exact versions you expect.
We can get around this in a pretty friendly way. You can:
1. `docker compose build` like usual, which installs the new dependencies along with creating a new lock file in the image
2. `docker compose run web bundle install`, which spawns a new container
    - This will finish in a few seconds since it's already up to date
    - This is a run-time operation which has bind mounts available, which means your lock file will be written to your dev box so you can commit it to version control
3. `docker compose down`, which removes the containers that were created with the above run command
That's a bit tedious to run though. I mean, it's 3 commands and chances are you already have your project running with `docker compose up`.
If you’re using the example app, I’ve included a few run commands:
- `./run bundle:install` to install dependencies in your `Gemfile`
- `./run bundle:update` to update existing dependencies in your `Gemfile`
- `./run yarn:install` to install or update dependencies in your `package.json`
- `./run bundle:outdated` and `./run yarn:outdated` to check what's available
The takeaway here is all you have to do is run one of those commands to install new dependencies and you never have to think about it again.
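If you're curious what one of those helpers might look like under the hood, here's a rough sketch of the three-step flow from earlier wrapped into a single `run` script function (the example app's real implementation differs a bit):

```bash
function bundle:install {
  # Rebuild the image so new gems get installed at build time
  docker compose build
  # Re-run bundle install at run-time where bind mounts exist, so the
  # updated Gemfile.lock gets written back to your dev box
  docker compose run web bundle install
  # Remove the container created by the run command above
  docker compose down
}
```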
Dependencies Are Installed to a Non-volume Mounted Path
Personally I don’t like a lot of details leaking out from the Docker image back to my Docker host, such as my dev box.
By default bundler will install gems to `/usr/local/bundle/gems/` which is fine because that's not volume mounted. They will never get reflected back to your host.
Yarn will use a relative `node_modules/` by default. Fortunately it's really easy to customize. The example app includes a `.yarnrc` file with `--modules-folder /node_modules` and now when you run `yarn install`, packages will get installed there within the Docker image.
# Database Config
The official Postgres Docker image will expect you to set at least the `POSTGRES_USER` and `POSTGRES_PASSWORD` env variables. The container will not start without them being set.
That's because the Postgres image will automatically create a DB for you named after the `POSTGRES_USER` with the password you set for `POSTGRES_PASSWORD`.
You can also set `POSTGRES_DB` if you want a different DB name than the user.
I like to reference the variables in a `docker-compose.yml` file but use variable interpolation to avoid hard coding any passwords in that file. Instead, they'll get set in a `.env` file. I've written about this pattern in Docker tip #93.
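As a hedged sketch of that pattern (the image tag and volume name here are just examples), the `postgres` service only references variables, and Compose fills them in from your `.env` file:

```yaml
services:
  postgres:
    image: "postgres:16-bookworm"
    environment:
      POSTGRES_USER: "${POSTGRES_USER}"
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - "postgres:/var/lib/postgresql/data"

volumes:
  postgres: {}
```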
On the Rails side, within `config/database.yml` I typically roll with:
default: &default
  adapter: "postgresql"
  encoding: "unicode"
  database: "<%= ENV.fetch("POSTGRES_DB") { "hello" } %>"
  username: "<%= ENV.fetch("POSTGRES_USER") { "hello" } %>"
  password: "<%= ENV.fetch("POSTGRES_PASSWORD") { "password" } %>"
  host: "<%= ENV.fetch("POSTGRES_HOST") { "postgres" } %>"
  port: "<%= ENV.fetch("POSTGRES_PORT") { 5432 } %>"
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: "<%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>"

development:
  <<: *default
  database: <%= ENV.fetch("POSTGRES_DB") { "hello" } %>_development

test:
  <<: *default
  database: <%= ENV.fetch("POSTGRES_DB") { "hello" } %>_test

production:
  <<: *default
  database: <%= ENV.fetch("POSTGRES_DB") { "hello" } %>_production
Basically everything is set to read from environment variables, and in production if you plan to set a `DATABASE_URL`, that's great. Due to how Rails loads this environment variable everything works in an intuitive way.
For example, if you set that env var Rails will parse the value and merge it into your database config file. Values in the env var will take precedence.
Things like this remind me why I like Rails so much. It feels like this path has been traveled many times by others. The road is paved with good intentions, AKA. it “just works”.
# Puma and Sidekiq Configs
Since we're running Puma in Docker we need to bind to `0.0.0.0` instead of `127.0.0.1` because localhost would be in the context of the container. We want to be able to connect from clients outside of the container, such as a web browser on your dev box.
We can do that with this:
# Specify the bind host.
bind "tcp://0.0.0.0:#{ENV.fetch("PORT") { "8000" }}"
The example app has the full puma.rb config which dynamically sets threads and workers based on either env variable values or the CPU core count on your box. None of that is really Docker specific but I wanted to call that out.
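For reference, here's a trimmed-down sketch of what that kind of `puma.rb` can look like (env var names such as `WEB_CONCURRENCY` are assumptions here, so check the example app for the real config):

```ruby
# config/puma.rb (sketch)
require "etc"

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Fall back to the CPU core count when WEB_CONCURRENCY isn't set.
workers ENV.fetch("WEB_CONCURRENCY") { Etc.nprocessors }.to_i

# Bind to 0.0.0.0 so clients outside of the container can connect.
bind "tcp://0.0.0.0:#{ENV.fetch("PORT") { "8000" }}"
```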
The `sidekiq.rb` initializer has also been set up to connect to Redis through a single `REDIS_URL` env var. This isn't specific to Docker but is noteworthy.
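A minimal version of that initializer might look like this (the fallback URL assumes a Compose service named `redis`):

```ruby
# config/initializers/sidekiq.rb (sketch)
redis_url = ENV.fetch("REDIS_URL") { "redis://redis:6379/0" }

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url }
end
```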
# Web Console on Errors
Next up is the web console that comes up on any Rails page with an error. This is thanks to the `web-console` gem, which is now a Rails project.
It allows you to drop into a console straight from the error page in your browser to help debug something. Typically you'd enable it only in development mode given it lets you execute any code just like `rails console` would.
At the time of making this post the `config.web_console.allowed_ips` value defaults to `127.0.0.1`, but in a Dockerized world this needs to be opened up just like we did for Puma's bind address.
Without this `config/environments/development.rb` option being set the console will not work:
config.web_console.allowed_ips = ["0.0.0.0/0"]
This is safe in development. If you didn't want to allow all hosts to connect in development, you can delete the line `export DOCKER_WEB_PORT_FORWARD=8000` from the `.env` file, because by default the port is only published in such a way that only localhost can connect. In this case, that would be your dev box.
That in itself is enough to restrict all other hosts from connecting even if this config option is configured the way it is now.
The `.env` var opens it up to all hosts in development so it's easier to test your app on multiple devices such as a tablet or mobile phone.
# Logging to STDOUT
Docker is set up to expect that your containers log to STDOUT instead of a log file. This is really nice because it means you can configure logging once at the Docker level (such as logging to journald) and also handle log rotation with whatever tool is receiving your logs.
The example app's `config/application.rb` is set up to do that with:
logger = ActiveSupport::Logger.new(STDOUT)
logger.formatter = config.log_formatter
config.logger = ActiveSupport::TaggedLogging.new(logger)
I know Rails has a `RAILS_LOG_TO_STDOUT=1` env variable that you can set, but in a default `rails new` project this is only configured to work in `production.rb`. We want this to apply in development as well, and `application.rb` is the spot to have configuration apply to multiple environments. It also means that env variable isn't necessary to set.
# .dockerignore
This file controls what's copied into your image when you use `COPY` instructions.
To avoid this post becoming out of date vs what’s in the example app’s repo, I’d suggest checking out the file there for a complete working example.
But the high level takeaway is to avoid copying in logs and build artifacts such as what might be included in `node_modules/`, `public/assets/` and more.
It's also very important not to include your `config/master.key` or any `.env*` files except for an `.env.example` file which you know doesn't include secrets. You can inject environment variables into containers at run-time without building them into your image.
Building them into your image is bad because if you push your image to a Docker registry then the Docker registry now has access to your secrets. That’s one more spot where secrets can be leaked.
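To make that concrete, a partial `.dockerignore` following those rules could include entries along these lines (the example app's file is the complete reference):

```
.env*
!.env.example
config/master.key
log/
tmp/
node_modules/
public/assets/
```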
# Dockerfile
We’re going to cover a lot of ground here!
The official Ruby base image with Debian Slim
Debian Slim is a variant of Debian that's optimized for running in containers. It removes a ton of libraries and tools that are normally included with Debian.
The example app produces a 472 MB Docker image using the Ruby image along with Debian Slim. Building the same exact image without `-slim` produces a 960 MB image.
That's a huge difference in the end! The example app even includes `build-essential` to make it easier to compile most gems without having to chase down individual build dependencies. You could further optimize things by only including the exact build tools you need for your list of gems, but in practice I've found `build-essential` to be a nice balance between size and convenience.
Overall the difference between Slim and non-Slim is massive because the non-Slim version installs hundreds of build dependencies, of which your specific application might only need 1 or 2, in which case you can opt into using them by including them as `apt` packages.
The example app is already set up to use Slim, and this translates to changing `FROM ruby:X.X.X-bullseye` to `FROM ruby:X.X.X-slim-bullseye`. If you're reading this in the future, feel free to change `bullseye` to the latest stable Debian version.
I know Alpine is also an option but in my opinion it's not worth it. Yes, you'll end up with a slightly smaller image in the end but it comes at the cost of using `musl` instead of `glibc`. That's too much of a side topic for this post, but I've been burned in the past a few times when trying to switch to Alpine – such as network instability and run-time performance issues when connecting to Postgres. I'm very happy sticking with Debian.
Using Multi-stage builds for your Node environment
I think import maps could very well be the future (if not already present day) but when using TailwindCSS and other front-end CSS libraries I’ve always found myself needing a Node environment.
I know there's the standalone TailwindCSS binary that doesn't need Node, but in a large application that uses TailwindCSS there's a reasonable chance you'll end up using certain Tailwind plugins that aren't included, which means you need Node.
Long story short, I haven’t fully escaped Node when using Tailwind and using multi-stage builds will let you shave off hundreds of MBs from your final Docker image.
In 2021 I gave a talk at DockerCon that covered this topic and more. The example app is already hooked up to work with multi-stage builds (an even more up to date version than described in that DockerCon talk).
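To give you a feel for the shape of it, here's a heavily trimmed sketch of a multi-stage Dockerfile with a Node-based `assets` stage and a Ruby `app` stage. This is a generic illustration, not the example app's actual Dockerfile; the stage names, versions, paths and the assumption that your `yarn build` scripts invoke esbuild / Tailwind directly are all placeholders:

```dockerfile
# Node is only needed to build JS / CSS, so keep it in its own stage.
FROM node:20-bookworm-slim AS assets
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build && yarn build:css

# The final image only needs Ruby, your gems and the built assets.
FROM ruby:3.3-slim-bookworm AS app
WORKDIR /app
RUN apt-get update \
  && apt-get install -y --no-install-recommends build-essential libpq-dev \
  && rm -rf /var/lib/apt/lists/*
COPY Gemfile* ./
RUN bundle install
COPY . .
# Copy in only the build output from the Node stage; node_modules stays behind.
COPY --from=assets /app/app/assets/builds ./app/assets/builds
CMD ["rails", "server", "-b", "0.0.0.0"]
```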
COPY in your Gemfile and Gemfile.lock before your app
This is a general best practice with Docker to take advantage of layer caching. The TL;DR is you only need to re-build your dependencies if your dependencies change instead of having to re-build them for any app specific change.
For example, if you add a comment to your `user.rb` file, that shouldn't tell Docker to re-run a `bundle install`. You only want to run a `bundle install` if your `Gemfile` or lock file changes.
That translates to something like this:
COPY --chown=ruby:ruby Gemfile* .
RUN bundle install
COPY . .
Using `Gemfile* .` is beneficial because it'll work with or without a `Gemfile.lock` existing. IMO it's better than using `Gemfile Gemfile.lock .` because this will fail if the lock file isn't present.
Run Your Process as a Non-root User
This is also a general best practice but it’s important enough to call out here. I’ve written about this in the past in great detail.
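In Dockerfile terms it usually boils down to something like this sketch (the `ruby` user name matches the `--chown` flags used elsewhere in this post, and the IDs are arbitrary):

```dockerfile
# Assumes WORKDIR /app was set earlier so /app already exists.
RUN groupadd -g 1000 ruby \
  && useradd --create-home --no-log-init -u 1000 -g 1000 ruby \
  && chown ruby:ruby -R /app

USER ruby

COPY --chown=ruby:ruby . .
```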
Only `rails assets:precompile` in non-development environments
Pre-compiling your assets can slow you down in development. Fortunately a single if condition can control when they get compiled, such as:
RUN if [ "${RAILS_ENV}" != "development" ]; then \
  SECRET_KEY_BASE=dummyvalue rails assets:precompile; fi
Since you would rarely build your image with `RAILS_ENV=test`, this condition will ensure your assets get built in production, staging or whatever non-dev environments you have.
We're using a dummy secret key because pre-compiling assets will invoke parts of Rails that need it defined. There's a patch in Rails master at the time of writing this post which lets you set `SECRET_KEY_BASE_DUMMY=1` instead.
I’ll switch to using that and update this post when it’s available in a release since it’s a bit more explicit on what’s happening.
In either case this makes development a little more friendly.
# ENTRYPOINT Script
The `ENTRYPOINT` script will execute every time you start your container, such as when you do a `docker compose up` or `docker compose run web ...`.
Public Files and Volumes
My example app is set up to work in development and production. In production you might choose to have a single server deploy with nginx installed directly on your host which means you need to configure your Rails app to volume mount your public directory.
In your Dockerfile's `assets` stage you can pre-compile your assets and then in your `app` stage you can copy them in with `COPY --chown=ruby:ruby --from=assets /app/public /public`.
What this means in the end is, at build time you'll get a fresh set of assets built into `/public` (the leading `/` is important here). No volumes are in use at this point and it wouldn't be expected that you volume mount this path.
Now at run-time when your container starts you can copy those from `/public` to `/app/public` (which is volume mounted) and due to how Docker's 2-way bind mounts work, this even supports saving uploaded files from your app into Docker's bind mounted path.
That path would be `./public:/app/public` and now you can configure your nginx root path to wherever your `./public` directory is located on disk.
Long story short everything “just works”. nginx will see the new assets from your build and you can still persist user uploads to disk from your containerized Rails app if you choose to do that instead of uploading them to S3 or another object store.
All of this boils down to having a Docker `ENTRYPOINT` script with:
#!/usr/bin/env bash
set -e
cp -r /public /app
exec "$@"
If you're running in production at scale with Kubernetes or something else where you're not volume mounting assets to nginx on the same host, you can wrap that `cp` command in an if condition to only apply to `RAILS_ENV=development`.
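For example, a minimal sketch of that condition in the `ENTRYPOINT` script:

```bash
if [ "${RAILS_ENV}" == "development" ]; then
  cp -r /public /app
fi
```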
Only Keep the Latest `.sprockets-manifest-X.json` File
When pre-compiling your assets with Sprockets, it will produce a separate JSON file with an MD5 hash appended to the file name.
However, Sprockets isn’t configured to always choose the latest file that was generated, instead it picks the first file it finds.
This is a problem because if we have old JSON files lingering around from our volume it won’t always pick the latest file which means old assets will be served.
The `ENTRYPOINT` script from the example app has this in it to handle the above:
manifest_files=/app/public/assets/.sprockets-manifest-*.json

if compgen -G "${manifest_files}" > /dev/null 2>&1; then
  find \
    ${manifest_files} \
    -type f ! -name "$(basename /public/assets/.sprockets-manifest-*.json)" \
    -delete
fi
The high level overview here is if we have 1 or more manifest files then we'll delete all of them except for a specific manifest file. That specific file is being referenced from `/public/assets` which is the non-volume mounted path.
In the last section we covered that `cp` command. The important bit here is the manifest file in `/public/assets` is the latest one because it was just built. Everything else gets deleted.
We're using `compgen` because the `find` command will throw an error if there are no manifest files around. This will be the case in development. It's faster than using `ls`, and `[ -f ... ]` won't work when you want to match a pattern of files.
What about `rails db:migrate`?
Personally I'm not a fan of running migrations in an `ENTRYPOINT` script.
I think it’s best suited to run this separately as part of your deploy process because:
- For single server deploys, hopefully your deploy process is automated with a script; it doesn't matter if your script runs a `docker compose up -d` or `docker compose up -d && ./run rails db:migrate`
- You might be deploying to Kubernetes with Argo CD or Helm which have hooks to let you run a migration as a Kubernetes job before your app is rolled out
- When run as a separate job it makes gathering metrics about this easier
- If you have 10 replicas of your app you don't need to worry about `db:migrate` being triggered 10 times
  - To be fair this command is idempotent, but…
  - Having your migration get run as a Kubernetes job is nice since it only runs once and you avoid a replica x N rollout failure if your migration happens to fail
- It's generally good from a separation of concerns point of view and to reduce risk – migrations are scary enough as it is!
With that said, that's why I keep this separated out, but I understand if you want to deviate from that. At the time of writing this post, the official Rails Docker set up has migrations being run in an `ENTRYPOINT`.
I ended up adding a PR to adjust its `ENTRYPOINT` to use:

if [ "${*}" == "./bin/rails server" ]; then
  ./bin/rails db:prepare
fi
The idea here is the migration will only run when starting the Rails server, not if you start the console, Sidekiq, Action Cable or any other process.
This command takes about 1 second to execute even when no work needs to be done, which means it delays your app's start by an extra second. For single server deploys on a Rails app that takes ~5s to boot, that's a lot because each second is downtime.
That's also why I prefer migrating after your app is up for single server deploys. You just need to be mindful to release code that handles the case where the migration hasn't run yet, if you care about a user loading your app in the window before the migration finishes.
It’s a good idea to be mindful of these scenarios because at scale you’ll be dealing with rolling updates where both the old and new version of your app need to work with the “new” version of the database.
Basically you need to account for both the old and new version of your app accessing the same database. That goes beyond the scope of this post but it’s something to think about.
# docker-compose.yml and .env Files
The DockerCon talk goes into all of the patterns in the example app’s compose file.
One takeaway is you can use the same Docker image to run multiple containers.
For example the `web` and `worker` containers both use the same main Rails app image, except with a different command. Likewise the `js` and `css` containers use the image built from the assets stage.
Another is to use variable interpolation to make things portable across dev, CI, prod or any other environments. This also ensures it’s always safe to commit this single file to version control since no secrets are hard coded.
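Here's a condensed, hypothetical sketch of the shared-image idea; in the example app the services are built from Dockerfile stages rather than pulled by tag, but the shape is the same:

```yaml
services:
  web:
    image: "example-app:latest"
    command: "puma -C config/puma.rb"

  worker:
    image: "example-app:latest"
    command: "sidekiq -C config/sidekiq.yml"
```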
That’s it! I hope this helped you get going with Rails and Docker. I’ve been running this type of set up now for years in development and production.
# Demo Video
Timestamps
- 0:10 – Rails 7.1 will generate a Dockerfile
- 1:52 – Docker Compose is important
- 6:01 – Controlling which containers run in different envs
- 8:51 – Using a run script to make it easy to run Docker commands
- 10:52 – Handling a lack of TTY with Docker Compose
- 12:26 – Running esbuild and Tailwind in development
- 13:59 – Running tests while setting RAILS_ENV=test
- 16:20 – Limiting log levels in tests
- 17:37 – Handling lock files and dependencies
- 21:15 – A couple of shortcuts for common dependency commands
- 22:31 – A custom node_modules directory
- 25:33 – Configuring your database
- 28:34 – Configuring Puma and Sidekiq
- 30:30 – Web console access
- 33:30 – The .dockerignore file
- 36:13 – Injecting env variables at container run-time
- 37:38 – The Dockerfile
- 37:59 – Using Debian slim for a smaller image
- 43:01 – Multi-stage builds
- 46:58 – Docker is smart when it pulls images
- 47:50 – Only installing gems when your Gemfile changes
- 49:57 – Running your containers as a non-root user
- 53:52 – Only pre-compiling assets in non-dev environments
- 55:57 – Copying pre-compiled assets in a volume friendly way
- 58:21 – Ensuring the latest sprockets manifest file is used
- 1:01:58 – Where should you run database migrations?
- 1:07:23 – Running multiple commands from the same Docker image
- 1:09:23 – Environment variable interpolation
- 1:09:48 – Dev / prod parity is important
- 1:10:57 – Questions?
What are your best tips for running Rails in Docker? Let me know below!