---
title: Adding config to AWS ECS tasks
published: "2024-06-11"
publisher: Honeybadger
author: Benjamin Curtis
category: DevOps articles
tags:
  - DevOps
  - AWS
  - ecs
  - Fargate
  - Terraform
  - Vector
description: "If you want to run a Docker container in AWS ECS, but the image requires a configuration file to work properly, are you stuck creating a custom image just to add that file? Nope! Learn how to add that file at deployment time."
url: "https://www.honeybadger.io/blog/configure-docker-on-ecs/"
---

When deploying Docker containers to AWS ECS, you can encounter a situation where you want to run an image that requires some configuration. For example, let's say you wanted to run Vector[1](#fn:vector) as a [sidecar](https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar) to your main application so you can ship your application's metrics to a service like [Honeybadger Insights](https://www.honeybadger.io/tour/logging-observability/). To run Vector, you only need to provide one configuration file (`/etc/vector/vector.yaml`) to the image available on [Docker Hub](https://hub.docker.com/r/timberio/vector). However, creating your own image that just adds one file would be a hassle. It would be easier if you could pull the public image, add your config, and deploy that. But ECS doesn't allow you to mount a file when running the container like you can when running Docker on your laptop or a VM. There is a way to do it on ECS, though — let's check it out.

## Services and Tasks

But first, a little terminology. Running a Docker container on ECS requires you to create a [task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) that specifies what image(s) you want to run, what the command should be, what the environment variables are, etc. Continuing our example, a task definition that runs Vector looks like this:

```json
{
  "containerDefinitions": [
    {
      "name": "vector",
      "image": "timberio/vector:0.38.0-alpine",
      "essential": true,
      "environment": []
    }
  ]
}
```

Of course, this configuration won't do us much good as-is — it will run Vector, but there won't be any Vector configuration, so Vector won't be doing anything at all. We'll fix that in a bit. :)

An ECS [service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) runs your tasks (made up of one or more images) on your own EC2 instances or instances managed by AWS (known as [Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html)). We'll assume you're using Fargate for this tutorial. Each service definition specifies how many copies of the task definition you want to run (e.g., two or more for redundancy), what security group to use, the ports to forward to the containers, and so on. In other words, your task definition specifies the Docker-specific stuff like the image to use, and the service specifies how to run it in the AWS environment.
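To make that concrete, here's a rough sketch of a service definition for the task above, in the JSON shape accepted by `aws ecs create-service --cli-input-json`. The cluster name, subnet ID, and security group ID are placeholders you'd swap for your own:

```json
{
  "serviceName": "vector",
  "cluster": "my-cluster",
  "taskDefinition": "vector",
  "desiredCount": 2,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0123456789abcdef0"],
      "securityGroups": ["sg-0123456789abcdef0"],
      "assignPublicIp": "ENABLED"
    }
  }
}
```

Note that `desiredCount` is where the "two or more for redundancy" choice lives, and the networking details sit on the service rather than the task definition.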

With that out of the way, we can return to the task at hand (pun intended).

## Configuring a container

You might have a container that's configured entirely by environment variables. If that's the case, then you can use the `environment` section of the task definition to handle that:

```json
"environment": [
  { "name": "ENVIRONMENT", "value": "production" },
  { "name": "LOG_LEVEL", "value": "info" }
]
```

But you have to do a bit more work to get a configuration file to show up. I'll drop a task definition on you, then walk through the key points.

```json
{
  "containerDefinitions": [
    {
      "name": "vector",
      "image": "timberio/vector:0.38.0-alpine",
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "vector-config",
          "containerPath": "/etc/vector"
        }
      ],
      "dependsOn": [
        {
          "containerName": "vector-config",
          "condition": "COMPLETE"
        }
      ]
    },
    {
      "name": "vector-config",
      "image": "bash",
      "essential": false,
      "command": [
        "sh",
        "-c",
        "echo $VECTOR_CONFIG | base64 -d - | tee /etc/vector/vector.yaml"
      ],
      "environment": [
        {
          "name": "VECTOR_CONFIG",
          "value": "Contents of a config file go here"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "vector-config",
          "containerPath": "/etc/vector"
        }
      ]
    }
  ]
}
```

There are a few things to notice here:

- There are two containers instead of just one. This is how you run a sidecar (an app container and a logging container side by side) or, in this case, bootstrap one container with another.
- Both containers share a mountpoint (`vector-config`) at the same location (`/etc/vector`). The `containerPath` doesn't have to be the same, but the `sourceVolume` does. This allows one container to write to a file and the other container to be able to read that same file.
- The `vector` container depends on the `vector-config` container and waits to boot until the `vector-config` container has run its `command`.
- The `command` for the `vector-config` container populates a configuration file with the contents of an environment variable called `VECTOR_CONFIG`.

That's the bones of getting a file mounted for the Docker container. An initializer container creates the file on a shared volume; then, another container can read the file. But how do we get the contents of our config file into that environment variable, and what's with the `base64 -d -` thing?

## Terraform it

[Terraform](https://www.terraform.io) is a handy tool for automating the deployment of cloud infrastructure. It works with all kinds of clouds and is great for documenting and tracking your infrastructure changes. For this tutorial, we'll focus on just one Terraform resource — the one that can create our task definition and populate the configuration:

```hcl
resource "aws_ecs_task_definition" "vector" {
  family                   = "vector"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"

  volume {
    name = "vector-config"
  }

  container_definitions = jsonencode([
    {
      name      = "vector"
      image     = "timberio/vector:0.38.0-alpine"
      essential = true
      mountPoints = [
        {
          sourceVolume  = "vector-config"
          containerPath = "/etc/vector"
        }
      ]
      dependsOn = [
        {
          containerName = "vector-config"
          condition     = "COMPLETE"
        }
      ]
    },
    {
      name      = "vector-config"
      image     = "bash"
      essential = false
      command = [
        "sh",
        "-c",
        "echo $VECTOR_CONFIG | base64 -d - | tee /etc/vector/vector.yaml"
      ]
      environment = [
        {
          name  = "VECTOR_CONFIG"
          value = base64encode(file("vector.yaml"))
        }
      ]
      mountPoints = [
        {
          sourceVolume  = "vector-config"
          containerPath = "/etc/vector"
        }
      ]
    }
  ])
}
```

That looks pretty familiar, right? Terraform does a good job of sticking closely to the formats used by the various cloud providers. In this case, the `aws_ecs_task_definition` resource looks like the JSON used in task definitions. Note how the `VECTOR_CONFIG` environment variable is populated. Terraform provides `file` and `base64encode` helpers to read a file's contents and encode it, respectively[2](#fn:base64).
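If you're curious what that round trip looks like, here's a quick sketch you can run locally. It mimics what the `vector-config` container does: the environment variable holds the base64-encoded file (what Terraform's `base64encode(file(...))` would produce), and the command decodes it into place. The config contents here are just a placeholder, and we write to `/tmp` rather than `/etc/vector`:

```shell
# Encode a small sample config, as base64encode(file("vector.yaml")) would.
VECTOR_CONFIG=$(printf 'sources:\n  app_metrics:\n    type: prometheus_scrape\n' | base64)

# This is the vector-config container's command, pointed at /tmp for the demo.
echo "$VECTOR_CONFIG" | base64 -d - | tee /tmp/vector.yaml
```

The encode/decode round trip is what keeps quotes, newlines, and other special characters in the config file from mangling the JSON of the task definition.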

Our actual Vector config (that ends up at `/etc/vector/vector.yaml`) is stored in a file next to our Terraform config. It could look something like this:

```yaml
sources:
  app_metrics:
    type: prometheus_scrape
    endpoints:
      - http://localhost:9090/metrics

sinks:
  honeybadger_insights:
    type: "http"
    inputs: ["app_metrics"]
    uri: "https://api.honeybadger.io/v1/events"
    request:
      headers:
        X-API-Key: "hbp_123"
    encoding:
      codec: "json"
    framing:
      method: "newline_delimited"
```

Diving into how Vector works could be a whole 'nother blog post, but here's a quick run-down on what we're configuring our Vector sidecar to do. We first define a source, or in other words, something that emits some data for Vector to process. Vector supports many sources, like S3 buckets, Kafka topics, etc. We're telling Vector to scrape [Prometheus](https://prometheus.io/) metrics served by our application on port 9090[3](#fn:prometheus). The sink configuration sends data from Vector to someplace else — in this case, to [Honeybadger Insights](https://www.honeybadger.io/tour/logging-observability/).

## That's a wrap

So, that's how you can deploy a Docker image to AWS ECS with a custom configuration _without_ having to build and host a custom image. All it takes is a little bit of Terraform!

* * *

1. [Vector](https://www.vector.dev/) is an open-source, high-performance observability data platform for collecting, transforming, and shipping logs, metrics, and traces from various sources to a wide array of destinations. [↩](#fnref:vector)
    
2. Using Base64 encoding via the `base64encode` Terraform helper and decoding via the `base64 -d -` command allows us to avoid problems with quotes and other characters breaking the task definition's JSON configuration. [↩](#fnref:base64)
    
3. For example, you can use a [Prometheus exporter](https://github.com/discourse/prometheus_exporter) in your Rails app to get metrics that look like [this](https://gist.github.com/SamSaffron/e2e0c404ff0bacf5fbca80163b54f0a4) to be served on port 9090. [↩](#fnref:prometheus)

