Etcd 2.1.1 Encryption and Authentication

CoreOS etcd2 encryption

New to etcd 2.1.0 is the ability to use authentication to secure your etcd resources.  Encryption and authentication are relatively new additions, so I thought I would write a quick blog post to help me remember how to get these components up and running, and to help others, since some of the ideas were a little confusing to me at first.

I pieced together most of the information for this post from a few different sources.

The first was a pair of great tutorials (1, 2) for getting etcd encryption up and going.  The second was the etcd-ca project by CoreOS for creating a CA and issuing certs; there are other ways of doing it, but this was a straightforward method.  The third resource I recommend looking at is the Security page in the CoreOS docs, which shows examples of how to piece all of the commands and certs together.  The last resource readers might find useful is the etcd2 docs, which cover the different flags and configuration options and were helpful for finding all the options I needed to enable to get etcd2 working properly.


To use the authentication feature you will need etcd 2.1.0 or greater, which means either running CoreOS v752.1.0 or above, or grabbing the correct binary version/Docker image yourself.
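If you would rather test against the container image than upgrade the host OS, something like the following should work (the tag and flag values are just an example, and I'm assuming the official image's entrypoint runs etcd, as it does for the published releases):

docker run -d -p 2379:2379 quay.io/coreos/etcd:v2.1.1 \
  -name etcd0 \
  -listen-client-urls \
  -advertise-client-urls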

Authentication is still an “experimental” feature, so it may change at any time; therefore I have decided not to get into the details of how it works here.  If you are interested you can check out the docs on users and auth.

Running the CA server

At first I was concerned about running a CA server because I’ve had painful experiences with CAs in the past, but the etcd-ca tool makes this process easy and straightforward.  There are a few other CA resources in the etcd2 encryption docs but I won’t cover them here.

The easiest way to use the etcd-ca tool is to run it in a Docker container and write the certs out to the host via a shared volume.  The following steps will pull the repo and build the binary for running the tool (the clone URL and ./build script come from the etcd-ca README).

docker pull golang
docker run -i -t -v $(pwd):/go golang /bin/bash
git clone https://github.com/coreos/etcd-ca
cd etcd-ca
./build
cd ./bin

Create the certs

After the etcd-ca binary has been built we can start creating certs.  The first step is to create the CA cert, which will be used to sign all other certs.

./etcd-ca init

After creating the CA signing cert, we will create a certificate for the etcd server that clients will be authenticating to.

./etcd-ca new-cert -ip <etcd_server_ip> <hostname>
./etcd-ca sign <hostname>
./etcd-ca chain <hostname>
./etcd-ca export --insecure <hostname> | tar xvf -

Replace <etcd_server_ip> with the public address of the etcd server and <hostname> with the hostname of the etcd server.  In this example, something like core01 would be a good name.
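For example, with a server reachable at (a placeholder documentation address) and the hostname core01, the sequence would be:

./etcd-ca new-cert -ip core01
./etcd-ca sign core01
./etcd-ca chain core01
./etcd-ca export --insecure core01 | tar xvf -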

Optional – Client cert

A client cert is not necessary in every etcd encryption scenario, but if you are interested in having clients authenticate with their own cert it isn’t much extra effort to add.

./etcd-ca new-cert -ip <etcd_server_ip> client
./etcd-ca sign client
./etcd-ca export --insecure client | tar xvf -

Note:  You may need to rename the exported server/client key files to the filenames your configuration expects.  Also, if you mess up any of the certs or need to recreate them for any reason, you can simply delete them from the hidden .etcd-ca/ folder that contains all of the certificates and start over.
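As a sketch, assuming the export dropped core01.crt and core01.key.insecure into the current directory (the exact exported filenames may differ):

mv core01.crt /home/core/server.crt
mv core01.key.insecure /home/core/server.key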

Etcd cloud-config

The following cloud-config will configure etcd2 to use the certs we created above.

There is currently an issue parsing a few of the etcd2 command line flags, so the workaround (for now) is to split the configuration into a base config plus a drop-in that sets the equivalent environment variables.  The Environment lines in the drop-in below mirror the flags used in the manual commands later in this post.

#cloud-config

write_files:
  - path: /etc/systemd/system/etcd2.service.d/30-configuration.conf
    permissions: '0644'
    content: |
      [Service]
      # Encryption settings: the environment variable equivalents
      # of the etcd2 TLS command line flags
      Environment=ETCD_TRUSTED_CA_FILE=/home/core/ca.crt
      Environment=ETCD_CERT_FILE=/home/core/server.crt
      Environment=ETCD_KEY_FILE=/home/core/server.key
      # Uncomment to require client certificates:
      # Environment=ETCD_CLIENT_CERT_AUTH=true

  - path: /home/core/ca.crt
    permissions: '0644'
    content: |
      ca cert content

  - path: /home/core/server.crt
    permissions: '0644'
    content: |
      server cert content

  - path: /home/core/server.key
    permissions: '0644'
    content: |
      server key content

  - path: /home/core/client.crt
    permissions: '0644'
    content: |
      client cert content

  - path: /home/core/client.key
    permissions: '0644'
    content: |
      client key content

coreos:
  etcd2:
    name: etcd
    advertise-client-urls: https://$public_ipv4:2379
    initial-advertise-peer-urls: https://$private_ipv4:2380
    listen-peer-urls: https://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start

If you don’t want to bootstrap a node with cloud-config and instead are just interested in testing out encryption on an existing host, you can use the following commands.  You will still need to follow the steps above to generate all of the necessary certs!

Manually start etcd2 with server certificate:

etcd2 -name infra0 -data-dir infra0 \
  -cert-file=/home/core/server.crt -key-file=/home/core/server.key \
  -advertise-client-urls=https://<server_ip>:2379 -listen-client-urls=https://<server_ip>:2379

and to test the connection use the following curl command.

curl --cacert /home/core/ca.crt -L https://<server_ip>:2379/v2/keys/foo -XPUT -d value=bar -v

Manually start etcd2 with client certificate:

etcd2 -name infra0 -data-dir infra0 \
  -client-cert-auth -trusted-ca-file=/home/core/ca.crt \
  -cert-file=/home/core/server.crt -key-file=/home/core/server.key \
  -advertise-client-urls https://<server_ip>:2379 -listen-client-urls https://<server_ip>:2379

Similar to the above command you will just need to add the client certs to authenticate.

curl --cacert /home/core/ca.crt --cert /home/core/client.crt --key /home/core/client.key \
  -L https://<server_ip>:2379/v2/keys/foo -XPUT -d value=bar -v

Another way to test the certs is by using the etcdctl tool with a few added flags.

etcdctl --ca-file ca.crt --cert-file client.crt --key-file client.key --peers https://<server_ip>:2379 set /foo bar

etcdctl --ca-file ca.crt --cert-file client.crt --key-file client.key --peers https://<server_ip>:2379 get /foo

Encrypting etcd was a confusing process at first, but after working through the above examples most of it made sense.  I tend to have a hard time wrapping my head around all of the different moving parts, so hopefully this post effectively shows how the encryption pieces fit together.

The etcd-ca tool is very nice for testing because it is simple and straightforward, but it lacks a few features of a full-fledged CA.  I suggest looking at something like OpenSSL for a production scenario, especially if things like certificate revocation are important.


Test Kitchen style testing for Salt

If you are already familiar with Test Kitchen then a lot of this guide should be straightforward.  ChefDK has most of the needed tools bundled up already, so I recommend installing ChefDK and then extending it to work with Salt.

In addition to the Test Kitchen install dependencies, you will need the following gems to get Test Kitchen working with Salt:

  • kitchen-vagrant
  • kitchen-salt
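With ChefDK installed, both gems can be added to its bundled Ruby using the chef gem command (a plain gem install also works if you manage Ruby yourself):

chef gem install kitchen-vagrant
chef gem install kitchen-salt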

Then create a “.kitchen.yml” file in your /srv/salt directory.  This file tells Test Kitchen how to load its configuration so it can test out your Salt configurations.

Here is a sample of what your .kitchen.yml file might look like.

---
driver:
  name: vagrant

provisioner:
  name: salt_solo
  is_file_root: True
  pillars-from-files:
    base.sls: /srv/pillar/base.sls
  pillars:
    top.sls:
      base:
        '*':
          - base

platforms:
  - name: ubuntu-14.04

suites:
  - name: default

There is a good reference that describes the various options in the kitchen-salt docs.

I had to play around with this config to get things working correctly, so you may need to make your own adjustments.  The key components are described in the “provisioner” section.  “is_file_root” is important because it tells the minion where to look for its configuration; it essentially says to look at the top.sls file on the server that runs Test Kitchen.

Use “pillars-from-files” to manually add in any custom pillar data you have.  I had issues getting the default configuration to automatically add in pillar data, so I used this approach as a workaround.

Another caveat to mention is that in order to get this method working I had to break the best practice of storing external Salt formulas in /srv/formulas, and instead copy them directly into the “root” directory of /srv/salt.  So basically all of the logic and formulas will live in this base location.  If this point isn’t clear let me know and I can post more details.
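With the config and formulas in place, the normal Test Kitchen workflow applies.  Instance names are derived from the suite and platform, so with the sample config above it would be:

kitchen list
kitchen converge default-ubuntu-1404
kitchen login default-ubuntu-1404
kitchen destroy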

Vagrant style testing

The next best alternative I have found to using the Salt driver for Test Kitchen is manually spinning up a customized Vagrant box, either to test communication with the salt master or alternatively to connect via salt-ssh and run commands that way.

This method is a great complement if you aren’t interested in running Salt in local mode and instead want to learn about and test the salt-master and/or salt-ssh.  This method is also straightforward.

Here is what the custom config looks like for Vagrant (replace <static_ip> and <master_ip> with values for your environment).

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|

  # OS
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "salt-minion"
  config.vm.network :private_network, ip: "<static_ip>"

  # Copy Salt master files for masterless provisioning
  config.vm.synced_folder "/srv/salt/", "/srv/salt/"

  # Install/config Salt
  config.vm.provision :salt do |salt|
    salt.minion_config = "/etc/salt/minion"
    salt.run_highstate = false
    salt.install_type = "daily"
    salt.colorize = true

    # For remote master preseeding
    salt.minion_key = "salt-minion.pem"
    salt.minion_pub = "salt-minion.pub"

    # Debugging
    #salt.bootstrap_options = "-D"
    #salt.verbose = true
  end

  # Additional configuration
  config.vm.provision "shell", inline: "echo '<master_ip> salt' >> /etc/hosts"
  config.vm.provision "shell", inline: "apt-get install -y salt-ssh"
end

This config will do a few different things:

  • Configure a static address to make some testing easier
  • Dummy a host entry for your salt master
  • Bootstrap the salt installation
  • Copy over a centrally managed minion file (if you want to customize how the minion behaves)
  • Install salt-ssh if you want to play around with ssh functionality

Note:  To use salt-ssh you will need to create an entry in /etc/salt/roster for the Vagrant machine and set up credentials to connect (see the sketch below).  All of the Vagrant salt provisioner options can be found in the Vagrant docs.  Obviously much more can be done in Vagrant, but you will have to test the various options yourself to see what suits your needs.
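A minimal roster entry for the box above might look like this (the credentials shown are the Vagrant defaults and are an assumption; match them to your box):

salt-minion:
  host: <static_ip>
  user: vagrant
  passwd: vagrant
  sudo: True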

To check the current Salt keys, run the following command on the master.  It should not return anything for our minion yet since we haven’t created the keys.

sudo salt-key -L

With this configuration we are generating a key pair once and reusing it, so we only need to accept the key on the Salt master once.  To generate the keys, run the following command from the root vagrant directory.

sudo salt-key --gen-keys=salt-minion

Then to add the new entry on the Master (after bringing up the Vagrant box!):

sudo salt-key -a 'salt-minion'

Once this set of keys has been accepted, we can bring the minion VM up and down without having to worry about adding and deleting keys every time we need to test something.  Obviously this approach should not be carried beyond testing environments into production.

Lastly, use this command to delete an old minion:

sudo salt-key -d salt-minion


Being new to Salt, I found the combination of the custom Vagrant box and the Test Kitchen provisioner a great way to learn and to test Salt configurations.  The best part about this method is that there is no additional work to get the two approaches to work together.  For example, once you have your directory structure set up correctly on the host system (master config), everything is already in place for both the Test Kitchen and the Vagrant box methods of testing.

I have found the combination very useful in my own learning of Salt so far.  Obviously this won’t address all of the complexity of a deployment, but it is a great, easy way to get introduced to many of the concepts and ideas of Salt.

I am really enjoying Salt so far and I hope readers can put some of my findings to use in their own learning as well.


Quicktip: Manage Memory Usage with Supervisord

I have been using Supervisord for process management for quite a while now but had no idea it could manage memory usage (among other things) until just recently.

There is a Python project called Superlance which essentially adds extra functionality to supervisord for managing processes and memory.  The docs are a little thin, so I thought it would be a good idea to highlight some of the functionality for folks who just want a few examples of how it works and how it can be used.

Obviously you will want to have supervisor installed and configured already; that can be done with pip or via apt-get.  You will also need to make sure you have a proper [unix_http_server] section in your /etc/supervisor/supervisord.conf file.
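A minimal section looks something like this (these are the Ubuntu package defaults; adjust the socket path to your install):

[unix_http_server]
file=/var/run/supervisor.sock
chmod=0700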

To install Superlance (on Ubuntu 14.04).

sudo pip install superlance

This will download and install a handful of Python scripts that can then be plugged in to Supervisor.  Check the link above if you are interested in the other plugins.

Then you will need to add an event listener section to your supervisor config so memmon can manage memory usage (memmon runs off of supervisor’s TICK events).

[eventlistener:memmon]
command=memmon -p <program_name>=3GB
events=TICK_60

The “-p <program_name>” corresponds to the program header in your supervisor configuration.  There are other options for managing process groups, etc. for more advanced use cases, but this should cover most basic scenarios.
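For example, memmon can also watch a whole process group with -g or any process with -a (the names and sizes here are placeholders):

[eventlistener:memmon]
command=memmon -g <group_name>=2GB -a 4GB
events=TICK_60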

You will need to reload the supervisor configuration after your changes have been made.  Unfortunately the supervisor process needs to be fully reloaded.

sudo supervisorctl reload

If you want to check that the memmon script is picked up before restarting supervisor, you can use reread.

sudo supervisorctl reread

I would suggest reading through the Superlance docs and checking out the other scripts.  This adds another layer of functionality to supervisord that I didn’t know existed.


Change CoreOS default toolbox

This is a little trick that allows you to override the default base OS in the CoreOS “toolbox”.  The toolbox is a neat feature that lets you debug and troubleshoot issues from inside a container on CoreOS without any of the usual setup work.

The toolbox OS defaults to Fedora, which we’re going to change to Ubuntu.  The toolbox reads a custom configuration from the .toolboxrc file, located at /home/core/.toolboxrc by default.  To keep things simple we will only change the few pieces of the config needed to get the toolbox to behave how we want; more can be overridden, but we don’t really need anything else.
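Here is a .toolboxrc that switches the default image to Ubuntu (I chose the 14.04 tag; any tag you prefer should work):

TOOLBOX_DOCKER_IMAGE=ubuntu
TOOLBOX_DOCKER_TAG=14.04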


That’s pretty cool, but what if we want to have this config file be in place for all servers?  We don’t want to have to manually write this config file for every server we log in to.

To fix this issue we will add a simple configuration to the user-data file that gets fed into the CoreOS cloud-config when the server is created.  You can find more information about CoreOS cloud-configs here.

The bit in the cloud config that needs to change is the following.

write_files:
  - path: /home/core/.toolboxrc
    owner: core
    content: |
      TOOLBOX_DOCKER_IMAGE=ubuntu
      TOOLBOX_DOCKER_TAG=14.04

If you are already using cloud-config then this change should be easy: just add the “- path” entry to your existing write_files section.  New servers using this config will have the desired toolbox defaults.

This approach gives us an automated, reproducible way to clone our custom toolbox config to every server that uses cloud-config to bootstrap itself.  Once the config is in place simply run the “toolbox” command and it should use the custom values to pull the desired Ubuntu image.

Then you can run your Ubuntu commands and debugging tools from within the toolbox.  Everything else stays the same; we just use Ubuntu as our default toolbox OS now.  Here is the post that gave me the idea to do this originally.
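For example (a quick sketch; tcpdump is just one handy tool you might want):

toolbox
# now at an Ubuntu shell inside the toolbox container:
apt-get update && apt-get install -y tcpdump
tcpdump -i eth0   # interface name depends on your host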


DevOps Conferences


I did a post quite a while ago that highlighted some of the cooler system admin and operations oriented conferences on my radar at that time.  Since then I have changed jobs and am now in a DevOps oriented position, so I’d like to revisit the subject and update that list to reflect some of the cool conferences in the DevOps space.

I’d like to start off by saying that even if you can’t make it to the bigger conferences, local groups and meetups are an excellent way to get out and meet other professionals who do what you do.  They are also a great way to stay in the loop on what’s current and to learn about what others are doing.  If you are interested in eventually becoming a presenter or speaker, local meetups and groups can be a great way to get started.  There are numerous opportunities and communities (especially in bigger cities); check here for information or to see if there is a DevOps meetup near you.  If there is nothing nearby, start one!  If you can’t find any DevOps groups, look for Linux or developer groups and network from there; DevOps is becoming popular in broader circles.

After you get your feet wet with meetups, the next place to look is conferences that sound interesting to you.  There are about a million different opportunities to choose from: security conferences, developer conferences, server and network conferences, all the way down the line.  I am sticking with strictly DevOps related conferences because that is currently what I am interested in and know best.

Feel free to comment if I missed any conferences that you think should be on this list.

DevOps Days (Multiple dates)

Perhaps the most DevOps centric conference on this list.  These conferences are a great way to meet fellow DevOps professionals and network with them.  The space and industry are changing constantly, and staying on top of all of the changes is crucial to being successful.  Another nice thing about DevOps Days is that the events are spread out around the country (and world) and throughout the year, so they are very accessible.  WARNING:  DevOps Days are not tied to any one set of DevOps tools but rather to the principles and techniques and how to apply them to different environments.  If you are looking for super in-depth technical talks, this one may not be for you.

ChefConf (March)

The main Chef conference.  There are large conferences for all the main configuration management tools, but I chose to highlight Chef because that’s what we use at my job.  There are lots of good talks with a Chef centered theme that are also great because the practices can be applied with other tools.  For example, ChefConf covers many DevOps themes, including continuous integration and deployment, how to scale environments, tying different tools together, and general configuration management techniques.  Highly recommended for Chef users; feel free to substitute the other big configuration management tool conferences here if Chef isn’t your cup of tea (Salt, Puppet, Ansible).

CoreOS Fest (May)

  • 2015 videos haven’t been posted yet

Admittedly, this is a much smaller and more niche conference, but it is still awesome.  It is the first conference put on by the folks at CoreOS and was designed to help the community keep up with what is going on in the CoreOS and container world.  The venue is pretty small, but the content at this year’s conference was very good.  There were some epic announcements and talks, including Tectonic announcements and Kubernetes deep dives, so if container technology is something you’re interested in then this conference is definitely worth checking out.

Velocity (May)

This one just popped up on my DevOps conference radar.  I have been hearing good things about this conference for a while now but have not had the opportunity to go.  It always has interesting speakers and topics, and a number of the DevOps thought leaders show up for this event.  One cool thing about this conference is that there are a variety of topics at any one time, so it offers a nice, wide spectrum of information; for example, there are separate technical tracks covering different areas of DevOps.

DockerCon (June)

Docker has been growing at a crazy pace, so this seems like the big conference to check out if you are in the container space.  It is similar to CoreOS Fest but focuses more heavily on Docker (obviously).  I haven’t had a chance to go to one of these yet, but containers and Docker have so much momentum that they are very difficult to avoid.  As well, many people believe container technologies are the path to the future, so it is a good idea to be as close to the action as you can.

Monitorama (June)

I think this is one of the coolest conferences, but that is probably just because I am so obsessed with monitoring and metrics collection.  Monitoring seems to be one of those topics that isn’t always fun to deal with, but the talks and technologies at this conference actually make me excited about it.  To most, monitoring is a necessary evil, and a lot of the content from this conference can help make your life easier and better in all aspects of monitoring, from new trends and tools to how to correctly monitor and scale infrastructures.  Talks can be technical, but they are well worth it if monitoring is something that interests you.

AWS Re:Invent (November)

This one is a monster.  This is the big conference that AWS puts on every year to announce new products and technologies as well as provide some incredibly helpful technical talks.  I believe it is one of the pricier and more exclusive conferences, but it offers a lot in the way of content and detail.  It has some of the best, most technical discussion topics I have seen and has been invaluable as a learning resource.  All of the videos from the conference are posted on YouTube, so you can get access to this information for free.  Obviously the content is related to AWS, but I have found this to be a great way to learn.


Even if you don’t have a lot of time to travel or get out to these conferences, nearly all of them post video from the event so you can watch it whenever you want.  This is an INCREDIBLE learning tool and resource that is FREE.  The only downside to the videos is that you can’t ask any questions, but it is easy to find a presenter’s contact info if you are interested and feel like reaching out.

That being said, you tend to get a lot more out of attending a conference in person.  The main benefit of going over watching the videos alone is that you get to meet and talk to others in the space, get a feel for what everybody else is doing, and check out many cool tools that you might otherwise never hear about.  At every conference I attend, I learn about some new, incredibly useful tech that I had never heard of, and I always run into interesting people that I would otherwise not have the opportunity to meet.

So definitely if you can, get out to these conferences, meet and talk to people, and get as much out of them as you can.  If you can’t make it, check out the videos afterwards for some really great nuggets of information, they are a great way to keep your skills sharp and current.

If you have any more conferences to add to this list I would be happy to update it!  I am always looking for new conferences and DevOps related events.

About the Author: Josh Reichardt

Josh is the creator of this blog, a system administrator and a contributor to other technology communities such as /r/sysadmin and Ops School. You can also find him on Twitter and Facebook.