How to Manage Application Secrets in EC2

Are you deploying your apps to EC2 and wondering how to store your application secrets? Learn how to use KMS and IAM roles to store your secrets on S3 securely.

When you deploy your application to EC2, it's a good idea to use autoscaling groups. Autoscaling groups allow your application to scale up and down to meet demand, or to recover from failed instances, all without any manual intervention. To make them work, though, every instance must be fully ready to serve live traffic as soon as it finishes booting. If you're used to making a few manual changes on a new server, or running an initial deployment via Capistrano before it's ready to go, that requires a bit more effort than just deploying your app to a server.

For example, we have a user-facing web application built using Rails. When it's done booting, each instance needs to have that Rails app ready and waiting to respond to requests forwarded by the load balancer. To make this happen, I first created a custom AMI by taking a snapshot of an instance provisioned via Ansible with the app, nginx, etc. I then configured the autoscaling group with userdata that invokes a script mimicking what Capistrano does for a deployment: it pulls the latest code from git, runs bundler, and so on. So with the app and all its dependencies in place, what's left?
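For reference, here's a minimal sketch of what that userdata script might look like. The paths, branch, and service names below are placeholders for whatever your AMI actually contains, not our real setup:

```shell
#!/bin/bash
# Hypothetical userdata sketch -- adapt paths and names to your own AMI.
set -e

cd /var/www/app            # app location baked into the AMI by Ansible

git fetch origin           # pull the latest code, as Capistrano would
git reset --hard origin/master

bundle install --deployment --without development test

bundle exec rake assets:precompile
sudo systemctl restart app # hypothetical service name for the Rails app
```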

Application Secrets: The Deployer's Bane

Application secrets present a challenge: you want to keep them from being stored where they can be exposed (like, say, in your git repo), but you need them to be available to your app when it's running. And due to autoscaling, you can't rely on a human to be there to put them in place for your app when it needs them.

One answer to this problem is Vault by HashiCorp. It's a fantastic piece of software written specifically to solve this problem of keeping your secrets secret until your app needs them. The downside is that you have to provision and manage Vault; it's yet another service that you need to keep running.

Another option is to save the secrets in shared storage (S3, naturally) and ensure that only your instances have access to that bucket and/or key. You can do this with IAM roles, attaching policies that grant access to the restricted S3 resources. Storing all those secrets in plaintext on S3 still leaves you open to unwanted exposure, though: it's all too easy to accidentally make that data available to others with access to the bucket, or even to the whole world.

Wouldn't it be nice if you could encrypt your secrets before saving them on S3, and then load and decrypt them in the app when you need them?

The Secret Ingredient: Amazon's Key Management Service

Amazon's Key Management Service (KMS) provides an API for interacting with encryption keys. When combined with IAM roles and the Aws::S3::Encryption module, it only takes a few lines of code to load your secrets into your application while keeping them encrypted on S3.

Before I dig in, I have to thank Don Mills, who wrote a fantastic post on using KMS plus S3 for storing secrets. I altered his approach a bit by depending on IAM roles and keeping track of the KMS key ID separately, rather than storing the key info along with the secrets on S3.

KMS generates and provides access to a master encryption key that you can use to encrypt and decrypt data. When you ask it to encrypt something, KMS generates a temporary data key under the master key; your client uses that data key locally to encrypt or decrypt, and only an encrypted copy of the data key is ever stored alongside your data.
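This pattern is known as envelope encryption. To make the moving parts concrete, here's a local sketch of what the client does with a data key, using plain OpenSSL as a stand-in for KMS (in real use, KMS generates the data key and hands back both the plaintext key and a wrapped copy):

```ruby
require 'openssl'

# Stand-in for KMS GenerateDataKey: in real use, KMS returns this key plus
# a copy of it encrypted under the master key, and only the encrypted copy
# gets stored with the data.
data_key = OpenSSL::Random.random_bytes(32)

# Encrypt locally with the data key.
cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
cipher.key = data_key
iv = cipher.random_iv
ciphertext = cipher.update('my secret') + cipher.final
tag = cipher.auth_tag

# To decrypt, you'd first ask KMS to unwrap the stored data key, then:
decipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
decipher.key = data_key
decipher.iv = iv
decipher.auth_tag = tag
plaintext = decipher.update(ciphertext) + decipher.final
plaintext # => "my secret"
```

The Aws::S3::Encryption::Client we'll use below does all of this for you behind the scenes.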

To generate a key, go to the IAM console and choose the Encryption Keys link. When you create a key, you'll be asked to specify the IAM users or roles that will have the ability to use it. Select the role that will be assigned to the EC2 instances in your autoscaling group. Note the ARN of the key -- you'll be using that later.
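If you'd rather script it, the same key can be created with the AWS CLI. Note that the console flow also writes a key policy granting your chosen role usage of the key; with the CLI you'd set that policy yourself. The alias name here is just an example:

```shell
# Create the master key; note the KeyId and Arn in the JSON output.
aws kms create-key --description "app secrets"

# Optionally attach a friendly alias (substitute the KeyId from above).
aws kms create-alias \
  --alias-name alias/app-secrets \
  --target-key-id <key-id-from-create-key>
```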

Making the Roux: Equal parts KMS and IAM

Once you've created the key, use the IAM console to edit the IAM role that you selected. Grant access to the bucket where the secret will be stored by attaching a policy like this one:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1476277816000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucket/secrets.yml"
            ]
        }
    ]
}

With the key configured and the policy attached to the role, you can now interact with KMS and S3 via an Aws::S3::Encryption::Client instance. Here's some sample code that retrieves the secrets file and loads its contents into environment variables:

begin
  es3 = Aws::S3::Encryption::Client.new(kms_key_id: ENV['KMS_KEY_ID'])
  secrets = YAML.load(es3.get_object(bucket: "yourbucket", key: "secrets.yml").body.read)
  secrets.each do |k, v|
    ENV[k] ||= v # Don't override local ENV settings
  end
rescue ArgumentError
  # Raised when no KMS_KEY_ID was found in ENV, so there's nothing to do
rescue Aws::S3::Errors::NoSuchKey
  # No secrets file was found, so there's nothing to do
end

First we instantiate a client with the ID of the KMS key. The key's ARN (displayed in the IAM console when you created it) is stored in the KMS_KEY_ID environment variable. When you pass the key ID to the constructor, the client handles fetching the temporary decryption keys for you. You could also pass an Aws::S3::Client instance as an option if you wanted to talk to S3 with a different set of credentials than you use for KMS. With the IAM role set up as above, though, you don't need to: Aws::S3::Encryption::Client creates a new Aws::S3::Client for you with the credentials provided by the IAM role.

With the encrypted S3 client ready, use #get_object to fetch the data from S3 and decrypt it using the key provided by KMS. Once you have the data, you can do what you want with it. Our data is YAML, so we load it and stuff the key/value pairs into ENV for the application code to use.
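The ||= in that loop is what lets local settings win over what's on S3. Here's a standalone illustration of the merge behavior, with made-up keys standing in for the decrypted YAML:

```ruby
require 'yaml'

# Hypothetical secrets as they might come back from S3, already decrypted.
secrets = YAML.load("DB_PASSWORD: s3cret\nAPI_KEY: from-s3\n")

ENV['API_KEY'] = 'from-local' # pretend this was already set on the instance

secrets.each { |k, v| ENV[k] ||= v }

ENV['DB_PASSWORD'] # => "s3cret"     (filled in from S3)
ENV['API_KEY']     # => "from-local" (local value wins)
```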

Drop this code into an initializer file in your Rails application, and you are good to go. Well, once you have your secrets stored on S3, that is. :) Assuming you have an IRB console on an instance running with the right IAM role, you can do something like this to store your secrets:

# Encrypt the data from /path/to/secrets.yml and store it on S3
Aws::S3::Encryption::Client.new(kms_key_id: ENV['KMS_KEY_ID']).
  put_object(bucket: "yourbucket", key: "secrets.yml", body: File.read("/path/to/secrets.yml"))

Serve Immediately

Now your secrets are always available to any new instance that gets added to your autoscaling group, and they stay encrypted on S3. Everybody wins! :)

— Benjamin Curtis