Containerizing software means packaging it into standardized units for ease of development and deployment. Containers bundle your application's code together with all of its dependencies. A container is entirely self-contained: it packages your software along with a runtime environment and system libraries. Containers help developers and operations teams ensure that software runs the same regardless of its environment. By separating code from infrastructure, "containerized" apps run the same in a local environment, a test environment, and production.
Docker is one of the most popular platforms for developing and deploying software. Docker packages software as an "image", which is turned into a container at runtime when executed by the Docker Engine. This isolation allows developers to run many containers on a single host at the same time.
Rails developers face a unique set of challenges when containerizing an existing application. This article will provide a walkthrough of containerizing a functional Rails app and explain the important concepts and pitfalls along the way. This article is not a basic description of containers or Docker; instead, it is an explanation of problems developers face when containerizing production applications.
If you're following along, then you'll need a Rails application that isn't already dockerized (that's the Docker-specific term for "containerized"). I'll be using RailsWork, a fully featured side project that I just launched. It's a job board written with Rails and deployed to Heroku, but it isn't containerized.
Beyond that, you'll also need to have Docker installed. A popular way to install it is with Docker Desktop, which can be downloaded via the official website.
Once the app is downloaded, run the installer. After it runs, it will prompt you to drag the application to your Applications folder. You'll then have to launch the app from there and grant it the privileged permissions it asks for. As a last check to ensure Docker is installed properly, try to list the containers running on your machine by running the following in your terminal:

docker ps
If Docker is installed (and you're not running any containers), you'll get an empty list with just headers that look like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
It's important to start off with clear terminology before we dive in too deep.
After your Rails application is "Dockerized", it will run in a container. A container stands alone, is replaceable, and is often rebuilt.
A container is built from an image. An image is a virtual snapshot of a file system paired with metadata.
A Dockerfile is source code that describes how an image should be created. Dockerfiles are often included in a Dockerized app's repository and tracked in version control along with the rest of an app.
Creating a Dockerfile is easier than it sounds! Docker gives us special syntax that abstracts away the hard work of containerizing something. First, make your way to the root directory of the app you want to containerize. Now that you're ready to start working, it's a good idea to create a new branch if you're using git. You can easily create a new branch named dockerize-this-app by running the following:
git checkout -b dockerize-this-app
Next, create a Dockerfile and direct it to build an image based on a Ruby application. This can be done from the command line by running the following:
echo "FROM ruby:3.0.0" > Dockerfile
Here, we're just creating Dockerfile and adding a line that specifies where to find a Ruby container image. My project uses Ruby 3.0.0, so I used the appropriate image. If you're on a different version of Ruby, it's no problem. Docker has a list of all the supported images.
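If you'd rather not hardcode the version, one approach (a sketch, not part of the original walkthrough) is to generate the FROM line from a .ruby-version file, which many Rails repos already have. The "3.0.0" written below is a stand-in for a file that would already exist in a real repo:

```ruby
# Sketch: derive the Dockerfile's FROM line from a .ruby-version file.
# The version written here is a placeholder; your repo's file would already exist.
File.write(".ruby-version", "3.0.0")

version = File.read(".ruby-version").strip
File.write("Dockerfile", "FROM ruby:#{version}\n")

puts File.read("Dockerfile")   # FROM ruby:3.0.0
```

This keeps the Dockerfile's base image in sync with the version your tooling already uses.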
Next, manually instruct Docker to create a Docker image:
docker build -t rails_work .
Here, you can replace rails_work with any name you would like for the image. Also, be sure to include the period at the end; it tells Docker to use the current directory as the build context.
If you want to see that the image has been created, you can list images on your system with the following:
docker image list
This image is mostly empty, though; it doesn't currently contain our application. We can add the code from our app by appending the following to the end of the Dockerfile:
ADD . /rails_work
WORKDIR /rails_work
RUN bundle install
This copies over the files from your application, sets the working directory, and installs the application's dependencies. (Here, you would replace rails_work with the name of your app.)
At this point, you should rerun the command to rebuild the image:

docker build -t rails_work .
There's a possibility for an issue here, especially if you're doing this to an existing production application. Bundler may complain that the version of Bundler in the image is different from the one that created the Gemfile.lock file. If this happens, you have two clear options:
- Change the version of Bundler that the image is using.
- Delete the Gemfile.lock entirely. If you do this, make sure to pin any Gems that you need at specific versions, as the lockfile will be regenerated entirely.
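If you take the first option, one way to do it (a sketch; the version number below is a placeholder) is to install the exact Bundler version recorded in the "BUNDLED WITH" section at the bottom of your Gemfile.lock before bundling:

```dockerfile
# Placeholder version: copy the number from the "BUNDLED WITH"
# section at the bottom of your Gemfile.lock.
RUN gem install bundler -v 2.2.3
RUN bundle install
```

This keeps the image's Bundler aligned with the one that generated the lockfile.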
If your bundle install still fails, then you may need some extra installation in your Dockerfile:
RUN apt-get update && apt-get install -y shared-mime-info
If you're still experiencing issues, you may have chosen the wrong Ruby image to base off of, so it's worth starting investigations there.
Here is a good opportunity to set environment variables:
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
Next, add a line to expose port 3000, which is where Rails runs by default:

EXPOSE 3000
Lastly, instruct the container to open a bash shell when it starts:

CMD ["bash"]
Altogether, your Dockerfile should look like this (with the rails_work name substituted out):
FROM ruby:3.0.0
ADD . /rails_work
WORKDIR /rails_work
RUN bundle install
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
EXPOSE 3000
CMD ["bash"]
Docker Commands Explained
It would certainly help to have a full understanding of some of the most common Dockerfile commands.
- FROM -> Defines the base image to build from.
- RUN -> Executes a command inside the image while it is being built.
- ENV -> Defines environment variables.
- WORKDIR -> Sets the working directory for the instructions that follow.
- CMD -> Specifies what program to run when the container starts.
According to Docker's documentation, "Compose" is their tool for creating (and starting) applications with multiple Docker containers. Everything needed to spin up the application's necessary containers gets outlined in YAML. When someone runs docker-compose up, the containers are created! Docker Compose lets us declaratively describe our container configuration.
Before creating your Docker Compose file, it's important to indicate to Docker which files should be excluded from the image that gets built. Create a file called .dockerignore. (Note the leading period!) In this file, paste the following:
.git
.dockerignore
.env
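Depending on your app, you may also want to keep bulky or machine-specific files out of the build context. These additional entries are illustrative examples, not requirements:

```
log
tmp
node_modules
```

Anything listed here is simply never sent to the Docker build, which keeps images smaller and builds faster.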
If your Gemfile.lock is generated by the build process, then be sure to add Gemfile.lock to the ignores above.
Next, create a file called docker-compose.yml. This is where we'll describe our container configuration. We'll start off with this for the contents of the file:
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/Rails-Docker
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  postgres:
This file creates two services: one called db and the other called web. The db container will be built from a premade Postgres image, and you should substitute your own value for POSTGRES_PASSWORD. You should be careful not to put production secrets in this file; see the "Managing Secrets" section below for more information on that.
The web container is built from our Dockerfile and starts a Rails server bound to 0.0.0.0 on port 3000. The container's internal port 3000 is then mapped to port 3000 on the host.
And lastly, we have a Postgres volume to persist data for us.
Authenticating at build-time can be an issue for production applications. Perhaps your application seeks Gems from a private repository, or you just need to store database credentials.
Any information that is directly in the Dockerfile is forever baked into the container image, and this is a common security pitfall.
If you're using Rails' credentials manager, then giving Docker (or any host, for that matter) access is relatively trivial: in the Docker Compose file, you simply provide the RAILS_MASTER_KEY environment variable. For the given Compose service, you specify the key under an environment key, which you need to create if you haven't already. The docker-compose file from above would then become the following:
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/Rails-Docker
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - RAILS_MASTER_KEY=this_would_be_the_key
volumes:
  postgres:
Now, this leaves you at a crossroads. You likely want to have this file committed to source control, but you definitely don't want your master key or even your database password tracked by source control, as this would be another dangerous security issue. The best solution so far is to use the dotenv gem so that you can access these credentials by proxy, storing them in a separate file that isn't tracked by source control.
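As a sketch of that approach (the variable names below are placeholders): the secrets live in an untracked .env file, and the Compose file references the variables rather than the values. Docker Compose also reads a .env file in the project directory for exactly this kind of substitution:

```yaml
# .env (NOT committed to source control) would contain lines such as:
#   RAILS_MASTER_KEY=this_would_be_the_key
#   POSTGRES_PASSWORD=password
#
# docker-compose.yml then substitutes the variables instead of
# hardcoding the values (fragment of the web service shown above):
  web:
    environment:
      - RAILS_MASTER_KEY=${RAILS_MASTER_KEY}
```

With this in place, docker-compose.yml can be committed safely while .env stays in .gitignore.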
Running the Dockerized Application
Finally, you can run the dockerized application with the following command:
docker compose up
Believe it or not, that's it! Docker-compose makes spinning up a container easy, especially when it comes to command-line arguments.
If you want a list of running containers, simply run the following:

docker ps
If your Rails container is named web, you can execute commands on it in a rather straightforward way. For example, if you wanted to run a Rails console, all you'd need to do is run the following:
docker exec -it web rails console
If you just want a bash shell inside the container, you'd instead run the following:
docker exec -it web bash
Some More Pitfalls
One common issue with dockerized Rails applications in production is dealing with logs. They shouldn't live in the container's filesystem long-term. Docker suggests that logs simply be redirected to STDOUT. This can be explicitly configured in config/environments/production.rb.
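The idea can be sketched in plain Ruby (the build_logger helper below is hypothetical, used only to illustrate the pattern the generated Rails production config follows):

```ruby
require "logger"
require "stringio"

# Hypothetical helper sketching the stock Rails pattern: when
# RAILS_LOG_TO_STDOUT is set (as it would be inside a container),
# log to STDOUT instead of an on-disk log file.
def build_logger(env, out: $stdout)
  if env["RAILS_LOG_TO_STDOUT"]
    Logger.new(out)                   # container runtimes collect STDOUT
  else
    Logger.new("log/production.log")  # the traditional on-disk log
  end
end

# Demonstrate with a StringIO standing in for STDOUT:
out = StringIO.new
build_logger({ "RAILS_LOG_TO_STDOUT" => "1" }, out: out).info("hello from the container")
puts out.string.include?("hello from the container")   # true
```

Docker then aggregates whatever the process writes to STDOUT, so docker logs (or your log shipper) sees everything.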
Another common issue is that of mailers. If your application uses mailers, you must explicitly define the connection settings. SMTP is a perfectly fine delivery method and usually works well with defaults, but we must be careful to set the server location and other settings to match our container configuration.
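As a sketch of what those explicit settings might look like (every variable name and the example.com default here are placeholders, not values from the original app), the SMTP settings can be built from the container's environment:

```ruby
# Hypothetical helper: build ActionMailer SMTP settings from the
# container's environment. All names and defaults are placeholders.
def smtp_settings(env)
  {
    address:        env.fetch("SMTP_ADDRESS", "smtp.example.com"),
    port:           env.fetch("SMTP_PORT", "587").to_i,
    user_name:      env["SMTP_USERNAME"],
    password:       env["SMTP_PASSWORD"],
    authentication: :plain
  }
end

# In config/environments/production.rb this would be wired up as:
#   config.action_mailer.delivery_method = :smtp
#   config.action_mailer.smtp_settings   = smtp_settings(ENV)
puts smtp_settings({ "SMTP_ADDRESS" => "mail.internal" })[:address]   # mail.internal
```

Reading the server address from the environment keeps the mailer configuration in step with whatever the Compose file (or production orchestrator) provides.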
If you have workers or background jobs, such as Sidekiq, then you must run them in their own container.
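For example, here's a hedged sketch of the extra Compose services that could run Sidekiq alongside the app (the service names and the redis service are illustrative additions, not part of the earlier file); they would sit under the existing services: key and reuse the same image as web:

```yaml
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
  redis:
    image: redis
```

Because the sidekiq service builds from the same Dockerfile, the worker always runs the same code and gem versions as the web container.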
Containerizing a production Rails application comes with a set of challenges, as you no doubt have seen. As your application has grown, it likely has accumulated a number of dependencies that make a migration like this challenging. Whether it's background workers, mailers, or secrets, there are established patterns to handle most pitfalls. Once the initial work of getting a production application working with Docker is complete, the ease of future changes and deploys will make the investment worthwhile.