Setting up a pfSense NAT instance in AWS

One important aspect of cloud deployments that often gets overlooked, especially at startups, is security.  So I thought I would take some time to go through the process of setting up a NAT instance on AWS with full firewall capabilities.  There are instructions and documentation for this process which are very good but aren’t completely clear, so I will attempt to fill in some of the gaps I ran into when attempting to set this up myself.

There is one thing to take note of if you have used pfSense before: this firewall isn’t free.  There is a slight hourly charge that ends up coming out to about $500/yr (which comes out to about $42/month).  If you look at other commercial solutions with similar functionality you are looking at thousands of dollars per month in costs.  Long story short, the cloud image of pfSense has a tiny cost associated with it but is very much worth it.

Just for reference I put together a few comparison prices.

  • Barracuda web app firewall – ($1.04-1.76/hr) (up to ~$1300/month)
  • Vyatta  ($0.30-1.50/hr) (up to ~$1100/month)
  • Sophos UTM ($0.35-$2.80/hr) (up to ~$2000/month)
  • pfSense ($0.07/hr) ($42/month)

As you can see, pfSense is very reasonable compared to some of the other bigger players.  You can build an r3.8xlarge instance and the software price won’t change which doesn’t seem to be the case with others.  One bonus to choosing pfSense is that you automatically qualify for support by agreeing to the ToS when getting the pfSense AMI set up.

Finally, pfSense is rock solid, built on top of BSD and thoroughly tested.  I have been running pfSense on other projects outside of AWS for 5+ years and have never had an issue with it outside of a dead hard drive one time.  Other added benefits of choosing pfSense are frequent, thoroughly tested updates, tons of add-ons (including IPS and VPN packages) so additional functionality can be built on top, and great community support.

Getting started

There are a few good resources that I found to be useful when working through this problem, which got me most of the way to a working setup.  They are listed below.

And here is the link to my question about how to do this on serverfault, there is some good detail in the post over there.

Setting up the NAT in pfSense

The first issue that was confusing was getting the network interfaces set up and configured.  For this setup you will need two interfaces, preferably with static IP addresses.  You will also need to make sure that you disable source/destination checks for the interface that will be acting as the LAN interface that the NAT traffic flows through.  Disabling source and destination checks is pretty straightforward and is detailed in pretty much all of the guides.
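
If you prefer the command line over the console, the source/destination check can also be disabled with the AWS CLI.  This is just a sketch; the instance ID below is a placeholder for your pfSense instance.

aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check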

You should note that there will be firewall tabs for LAN as well as WAN; if you can keep these two straight it should be much easier to troubleshoot and configure your pfSense machine.  Out of the box, the firewall on pfSense will not be configured to allow your LAN interface to do any sort of NATing, so you will need to manually create rules to get started.  If you check the WAN firewall tab you should notice some access rules, but the LAN tab should be empty.  Most of the work we will be doing will be on the LAN firewall.

The first rule to set up, to make things easier to troubleshoot, is a ping rule.  There is a WAN rule for ping but not one for LAN.  You can essentially copy the WAN rule into a new one and modify it to look similar to the following.

LAN firewall rule

This rule will work as the template for the other rules that need to be put in place.  The other rules will be for outbound web access.  Just copy this rule into a new rule, change the protocol to TCP, and make one rule that allows port 80 and another that allows 443.  The resulting rules should look similar to what I have listed below.

firewall rules for nat
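
In plain text, the three LAN rules end up looking roughly like this (interface names and networks will vary with your setup):

Pass  ICMP  source: LAN net  destination: any             (allow ping for troubleshooting)
Pass  TCP   source: LAN net  destination: any, port 80    (outbound HTTP)
Pass  TCP   source: LAN net  destination: any, port 443   (outbound HTTPS)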

Just a quick note: if at any point you are having trouble seeing traffic or are getting stuck in your troubleshooting, an excellent way to figure out what is going on is the logging that is provided by pfSense.  You can access all of the various logs to see what is happening by selecting Status -> System Logs and then highlighting the Firewall tab.

Modifying your outbound nat

Here is what your outbound NAT rule should look like.

outbound nat

Notice the “Networks_to_NAT” value in the source section.  This is a pfSense alias that can be used as a sort of variable to help ease management.  You can either use this alias or specify the local subnet you want to use here.  To check the values in your alias you can go to Firewall -> Aliases.
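
In plain text, the outbound NAT rule boils down to roughly the following (the translation address is the WAN interface address, which is the default):

Interface: WAN    Source: Networks_to_NAT    Destination: any    Translation: WAN interface address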

Conclusion

This setup will provide you with a nice, easy way to manage your network in AWS.  The guides for setting up a NAT are nice and are a good first step, but with a firewall in place you can do so many other things, especially auditing, that just aren’t available or viable with a straight AWS NAT instance, or that are way out of your price range with some of the other commercial solutions available.

pfSense also provides the capability to add more advanced tools like IDS/IPS, VPN and high availability if you choose, so there is nice room for expansion.  Even if you don’t take advantage of all of the additional components of pfSense, you will still have a rock solid firewall and NAT instance that is suitable for production workloads at a fraction of the cost of other commercial solutions.


CLI hotkey and navigation guide

I have been meaning to write this post for quite a while now but have always managed to forget.  I have been piecing together useful terminal shortcuts, commands and productivity tools since I started using Linux back in the day.  If you spend any amount of time in the terminal you hopefully know about some of these tricks already, but if you’re like me, you are always looking for ways to improve the efficiency of your bash workflow and make your life easier.

There are a few things that I would quickly like to note.  If you use tmux as your CLI session manager, you may not be able to use some of the mentioned hotkeys by default if you don’t have certain settings turned on in your configuration file.

You can take a look at my custom .tmux.conf file if you’re interested in screen-style bindings plus the hotkey configuration.  If you simply want to add the option that turns on the correct hotkey bindings for your terminal, add this line to your ~/.tmux.conf file:

set-window-option -g xterm-keys on

Also, if you are a Mac user and don’t already know about it, I highly recommend checking out iTerm2.  Coming primarily from a Linux background, the hotkey bindings in Mac OS X are a little bit different than what I am used to and were initially a challenge to get accustomed to.  The transition took a little while, but iTerm has definitely helped me out immensely, as have a few other tricks learned along the way.  I really haven’t dug through all the options in iTerm, but there are a huge number of options and customizations that can be made.

The only thing I have been interested in so far is the navigation which I will highlight below.

Adjust iTerm keybindings – As I mentioned, I am used to Linux keybindings, so a natural fit for my purposes is the option key.  The first step is to disable the custom bindings in the iTerm preferences.  To do this, click iTerm -> Preferences -> Profiles -> Keys, find the bindings for option-left arrow and option-right arrow, and remove them from the default profile.

Next, add the following to your global key bindings, iTerm -> Preferences -> Keys.

iterm2

Move left one word

  • Keyboard shortcut: Option-Left Arrow (⌥←)
  • Action: Send Escape Sequence
  • Escape: b

Move right one word

  • Keyboard shortcut: Option-Right Arrow (⌥→)
  • Action: Send Escape Sequence
  • Escape: f

Finally, it is also worth pointing out that I use zsh as my default shell.  There are some really nice additions that zsh offers over vanilla bash.  I recently ran across this blog post which has some awesome tips.  I have also written about switching to zsh here.  Anyway, here is the list.  It will grow as I find more tips.

Basic navigation:

  • Ctrl-left/right arrow – Jump between words quickly.
  • Opt-left/right arrow – Custom iTerm binding for jumping between words quickly.
  • Alt-left/right arrow – Linux only.  Jump between words quickly.
  • Esc-b/f – Mac OS.  Similar to arrow keys, move between words quickly.
  • Alt-b – Linux only.  Jump back one word.  Handy with other hotkeys overridden.
  • Ctrl-a – Jump to the beginning of a line (doesn’t work with tmux mappings).
  • Ctrl-e – Jump to the end of a line.
  • End – Similar to Ctrl-e, this will send your cursor to the end of the line.
  • Home – Similar to End, except jumps to the beginning of the line.

Intermediate navigation:

  • Ctrl-u – Cut the entire command line (it goes to the shell’s kill ring, not the system clipboard).
  • Ctrl-y – Paste (yank) the text previously cut with Ctrl-u back into the terminal.
  • Ctrl-w – Cut a word to the left of the cursor.
  • Alt-d – Cut the word after the cursor position.

Advanced use:

  • Ctrl-x Ctrl-e – Zsh command.  Edit the current command in your $EDITOR, which for me is vim.
  • Ctrl-r – Everybody hopefully should know this one.  It is basically reverse search through your command history.
  • Ctrl-k – Erase everything after the current cursor position.  Handy for long commands.
  • !<command>:p – Print the last command that starts with <command>, without running it.
  • cd ... – Zsh trick.  This can be easily aliased but will jump up two directories.
  • !$ – Quickly access the last argument of the last command (see the example after this list).
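
Here is a quick example of !$ in action; the directory name is just made up for illustration.

mkdir -p ~/projects/new-app
cd !$        # expands to: cd ~/projects/new-app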

Zsh tab completion

Tab completion with Zsh is awesome, it’s like bash completion on steroids.  I will attempt to highlight some of my favorite tab completion tricks with Zsh.

Directory shorthand – Say you need to get to a directory that is nested deeply.  You can type just the first few characters of each path component, enough to uniquely match each directory, and let tab completion expand the rest instead of typing out the whole path.  So for example, cd /u/lo/b will expand out to /usr/local/bin.

Command specific history – This one comes in handy all the time.  If you need to grab a command that you don’t use very often you can use Ctrl-r to match the first part of the command, and from there you can up arrow to locate the exact command you typed.

Spelling and case correction – Bash by default can get annoying if you have a long command typed out but somehow managed to typo part of it.  In zsh this is (sometimes) corrected for you automatically when you <tab> to complete the command.  For example, if you are changing directories into the ‘Documents’ directory you can type ‘cd ~/doc/’ and the correct location will be expanded for you.
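
If you want this behavior in your own setup, the relevant bits of ~/.zshrc look something like the following.  This is only a sketch; frameworks like oh-my-zsh may already set these for you, and the completion style assumes compinit has been loaded.

setopt correct                                               # offer to fix mistyped command names
zstyle ':completion:*' matcher-list 'm:{a-zA-Z}={A-Za-z}'    # case-insensitive tab completion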

This list will continue to grow as I find more handy shortcuts, hotkeys or other useful tips and tricks in my day to day command line work.  I really want to build a similar list for things in Vim, but my Vim skills are unfortunately lacking, plus there is already some really nice documentation and guidance out there.  If you are interested in writing up a Vim productivity post I would love to post it.  Likewise, if you have any other nice shortcuts or tips you think are worth mentioning, post them in the comments and I will try to get them added to the list.


Add environment variable file to fig

If you haven’t heard of Fig and are using Docker, you need to check it out.  Basically, Fig is a tool that allows users to quickly create development environments using Docker.  Fig alleviates the complexity and tediousness of having to manually bring containers up and down, stitch them together and generally orchestrate a Docker environment.  On top of this, Fig offers some other cool functionality, for example the ability to scale up applications.  I am excited to see what happens with the project because it was recently merged into the Docker project.  My guess is that there will be many new features and additions to Fig, or to Docker itself if Fig gets rolled into the Docker core project.  For now, you can check out Fig here if you haven’t heard of it and are interested in learning more.

One issue that I have run into is that there is currently not a great way to handle a large number of environment variables in Fig.  In Docker there is an option that allows a user to pass in an environment variable file with the --env-file <filename> flag.  To do the same with Fig in its current form, you are forced to list out each individual environment variable in your configuration, which can quickly become tedious and confusing.
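
With plain Docker that looks something like the following; the image name and file path are just placeholders.

docker run --env-file /home/username/test_vars username/testcontainer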

There is a PR out for adding the ability to pass an environment variable file to Fig via an env-file option in the fig.yml file.  This approach is much easier to me than adding each environment variable separately to the configuration with the environment option, and then having to update the fig.yml configuration if any of the values ever change.  I know that functionality like this will get merged eventually, but until then I have been using the PR as a workaround to solve this issue.  I think this is also a good opportunity to show people how to get a project working manually with custom changes.  Luckily the fix isn’t difficult.

This post will assume that you have git, python and pip installed.  If you don’t have these tools installed, go ahead and get that done.  The first step is to clone the fig project from github onto your local machine; see above for the link to the PR.

git clone git@github.com:docker/fig.git

Jump into the fig project you just cloned and edit the service.py file.  This is the file that handles the processing of environment variables.  There are a few sections that need to be updated.  Check the PR to be sure, but at the time of this writing, the following code should be added.

Line 55

- supported_options = DOCKER_CONFIG_KEYS + ['build', 'expose']
+ supported_options = DOCKER_CONFIG_KEYS + ['build', 'expose', 'env-file']

Line 318

+    def _get_environment(self):
+        env = {}
+
+        if 'env-file' in self.options:
+            env = env_vars_from_file(self.options['env-file'])
+
+        if 'environment' in self.options:
+            if isinstance(self.options['environment'], list):
+                env.update(dict(split_env(e) for e in self.options['environment']))
+            else:
+                env.update(self.options['environment'])
+
+        return dict(resolve_env(k, v) for k, v in env.iteritems())
+

Line 352

-        if 'environment' in container_options:
-            if isinstance(container_options['environment'], list):
-                container_options['environment'] = dict(split_env(e) for e in container_options['environment'])
-            container_options['environment'] = dict(resolve_env(k, v) for k, v in container_options['environment'].iteritems())
+        container_options['environment'] = self._get_environment()

Line 518

+
+
+def env_vars_from_file(filename):
+    """
+    Read in a line delimited file of environment variables.
+    """
+    env = {}
+    for line in open(filename, 'r'):
+        line = line.strip()
+        if line and not line.startswith('#') and '=' in line:
+            k, v = line.split('=', 1)
+            env[k] = v
+    return env

That should be it.  Now you should be able to install fig with the new changes.  Make sure you are in the root fig directory that contains the setup.py file.

sudo python setup.py develop

Now you should be able to edit your fig.yml file to reflect the changes that have been added to fig via env-file.  Here is what a sample configuration might look like.

testcontainer:
  image: username/testcontainer
  ports:
    - "8080:8080"
  links:
    - "mongodb:mongodb"
  env-file: "/home/username/test_vars"

Notice that nothing else changed, but instead of having to list out environment variables one at a time we can simply read in a file.  I have found this to be very useful for my workflow, and I hope others can adapt or use it as well.  I have a feeling this will get merged into Fig at some point, but for now this workaround works.
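
For reference, the env file itself is just KEY=value pairs, one per line, with blank lines and # comments ignored by the parsing function above.  The values here are made up.

# /home/username/test_vars
MONGO_URL=mongodb://mongodb:27017/testdb
API_KEY=abc123
DEBUG=false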



Introduction to boot2docker

If you work on a Mac (or Windows) and use Docker then you have probably heard of boot2docker.  If you haven’t heard of it before, boot2docker is basically a super lightweight Linux VM that is designed to run Docker containers.  Unfortunately there is no support (yet) in the Mac OS X or Windows kernels for running Docker natively, so this lightweight VM must be used as an intermediary layer that allows the host operating system to communicate with the Docker daemon running inside the VM.  This solution really is not that limiting once you become comfortable with boot2docker and how to work around some of its current limitations.

Because Docker itself is such a new piece of software, the ecosystem and surrounding environment is still expanding and growing rapidly.  As such, the tooling has not had a great deal of time to mature.  So with pretty much anything that’s new, especially in the software and Open Source world, there are definitely some nuances and some things to be aware of when working with boot2docker.

That being said, the boot2docker project bridges a gap and does a great job of bringing Docker to an otherwise incompatible platform as well as making it easy to use across platforms, which is especially useful for furthering the adoption of Docker among Mac and Windows users.

When getting started with boot2docker, it is important to note that there are a few different things going on under the hood.

Components

The first component is VirtualBox.  If you are familiar with virtual machines, there’s pretty much nothing new here.  It is the underpinning that runs the VM and is a common tool for creating and managing VMs.  One important note about VirtualBox: it is currently the key to making volume sharing work with boot2docker, allowing a user to pass local directories and files into containers using its shared folder implementation.  Unfortunately it has been pretty well documented that vboxsf (VirtualBox shared folders) has poor performance compared to other solutions.  This is something that the boot2docker team is aware of and working on for future releases.  I have a workaround that I will outline below if anyone happens to hit these performance issues.

The next component is the VM.  This is a super lightweight image based on Tiny Core Linux and the 3.16.4 Linux kernel, with AUFS to support Docker.  Other than that there is pretty much nothing else to it.  The TCL image is about 27MB and boots up in about 5 seconds, making it very fast to get going with.  There are also instructions on the boot2docker site for creating custom .iso’s if you are interested in building your own customized TCL image.

The final component is called boot2docker-cli, which is normally referred to as the management tool.  This tool does a lot of the magic to get your host talking to the VM with minimal interaction.  It is basically the glue, or the duct tape, that allows users to pass commands from a local shell into the VM and get Docker to do stuff.

Installation

It is pretty dead simple to get boot2docker set up and configured.  You can download everything in one shot from the links on their site http://boot2docker.io, or you can install manually on OS X with brew and a few other tools.  I highly recommend the packaged installer; it is very straightforward and easy to follow, and there is a good video depiction of the process on the boot2docker site.

If you choose to install everything with brew you can use the following commands as a reference.  Obviously it will be assumed that brew is already installed and set up on your OSX system.  The first step is to install boot2docker.

brew install boot2docker

You might need to install Virtualbox separately using this method, depending on whether or not you already have a good version of Virtualbox to use with boot2docker.

The following commands will assume you are starting from scratch and do not have VBox installed.

brew update
brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox

That is pretty much it for installation.

Usage

The boot2docker CLI is pretty straightforward to use.  There are a bunch of commands to help users interface with the boot2docker VM from the command line.  The most basic usage, to initialize and create a vanilla boot2docker VM, is the following command.

boot2docker init

This will pull down the correct image and get the environment set up.  Once the VM has been created (see the tricks section below for a bit of customization) you are ready to bring up the VM.

boot2docker start

This command will simply start up the boot2docker VM and run some behind-the-scenes tasks to help make using the VM seamless.  Sometimes you will be asked to set ENV variables here; just go ahead and follow the instructions to add them.

There are a few other nice commands that help you interact with the boot2docker VM.  For example if you are having trouble communicating with the VM you can run the ip command to gather network information.

boot2docker ip

If the VM somehow gets shut off or you cannot access it you can check its status.

boot2docker status

Finally there is a nice help command that serves as a good guide for interacting with the VM in various ways.

boot2docker help

The commands listed in this section will cover 90% of day-to-day interaction and usage of the boot2docker VM.  There is a little bit of advanced usage of the CLI covered below in the tricks section.

Tricks

You can actually modify some of the default behavior of your boot2docker VM by altering some of the underlying boot2docker configuration.  For example, boot2docker will look in $HOME/.boot2docker/profile for any custom settings you may have.  If you want to change any network settings, adjust memory or CPU, or tweak a number of other settings, you simply change the profile to reflect the updated values.
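
One simple way to seed this file is to dump the current settings with the config command (covered in the troubleshooting section below) and then edit the values you care about.  This is only a sketch and assumes your version of the CLI writes the profile in the same format it reads back in.

boot2docker config > ~/.boot2docker/profile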

You can also override the defaults when you create your boot2docker VM by passing some arguments in.  If you want to change the memory or disk size, for example, you would run something like

boot2docker init --memory=4096 --disksize=60000

Notice the --disksize=60000.  Docker likes to take up a lot of disk space for some of its operations, so if you can afford to, I would very highly recommend adjusting the default disk size for the VM to avoid any strange out-of-disk issues.  Most Macbooks or Windows machines have plenty of extra resources and big disks, so usually there isn’t a good reason not to leverage the extra horsepower for your VM.

Troubleshooting

One very useful command for gathering information about your boot2docker environment is the boot2docker config command.  This command will give you all the basic information about the running config.  This can be very valuable when you are trying to troubleshoot different types of errors.

If you are familiar with boot2docker already you might have noticed that it isn’t a perfect solution and there are some weird quirks and nuances.  For example, if you put your host machine to sleep while the boot2docker VM is still running and then attempt to run things in Docker, you may get some quirky results or things just won’t work.  This is due to the time skew that occurs when you put the machine to sleep and wake it up again; you can check the github issue for details.  You can quickly check if the boot2docker VM is out of sync with this command.

date -u; boot2docker ssh date -u

If you notice that the times don’t match up then you know to update your time settings.  The best fix I have found so far is to basically reset the time settings by wrapping the following commands in a script.

#!/bin/sh
 
# Fix NTP/Time
boot2docker ssh -- sudo killall -9 ntpd
boot2docker ssh -- sudo ntpclient -s -h pool.ntp.org
boot2docker ssh -- sudo ntpd -p pool.ntp.org

For about 95% of the time skew issues you can simply run sudo ntpclient -s -h pool.ntp.org (via boot2docker ssh, as above) to take care of the issue.

Another interesting boot2docker oddity is that sometimes you will not be able to connect to the Docker daemon or will receive other strange errors.  Usually this indicates that the environment variables that get set by boot2docker have disappeared, for example if you close your terminal window or something similar.  Both of the following errors indicate this issue.

dial unix /var/run/docker.sock: no such file or directory

or

Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

The solution is to either add the ENV variables back into the terminal session by hand, or just as easily modify your bashrc config file to read the values in when the terminal loads up.  Here are the variables that need to be reset, or appended to your bashrc.

export DOCKER_CERT_PATH=/Users/jmreicha/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376

This assumes your boot2docker VM has an address of 192.168.59.103 and uses port 2376 for communication.
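
If your version of the boot2docker CLI includes the shellinit command, you can also have it print the correct values for your particular setup and load them in one shot:

eval "$(boot2docker shellinit)"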

Shared folders

This has been my biggest gripe so far with boot2docker, as I’m sure it has been for others as well.  Mostly I am upset that vboxsf shared folders perform so poorly, though in all fairness the boot2docker people have been awesome in getting things working with vboxsf as of release 1.3.  Another caveat to note, if you aren’t aware, is that currently if you pass volumes to docker with “-v”, the directory you share must be located within the “/Users” directory on OS X.  Obviously not a huge issue, but something to be aware of if you happen to have problems with volume sharing.

The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem read heavy (grunt), performance becomes a factor.  I have been exploring different workarounds because of this limitation but have not found many that I could convince our developers to use.  Others have had luck by creating a container that runs SMB, or have been able to share a host directory into the boot2docker VM with sshfs, but I have not had great success with these options.  If anybody has these working, please let me know; I’d love to see how to get them working.

The best solution I have come up with so far is using Vagrant with a customized version of boot2docker that has NFS support enabled, which requires very little “hacking” to get working, which is nice.  A good enough selling point for me is the speed increase from using NFS instead of vboxsf; it’s pretty staggering actually.

This is the project that I have been using https://vagrantcloud.com/yungsang/boxes/boot2docker.  Thanks to @yungsang for putting this project together.  Basically it uses a custom vagrant-box based off of the boot2docker iso to accomplish its folder sharing with the awesome customization that Vagrant provides.

To get this workaround working, grab the Vagrantfile from the link provided above and put it in the location you would like to run Vagrant from.  The magic sauce for the volume sharing is in this line.

config.vm.synced_folder ".", "/vagrant", type: "nfs"

This tells Vagrant to share your current directory into the boot2docker VM at /vagrant, using NFS.  I would also suggest modifying the default CPU and memory if your machine is beefy enough.

v.cpus = 4
v.memory = 4096
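
Those two settings live inside the provider block of the Vagrantfile; assuming the VirtualBox provider, it looks roughly like this:

config.vm.provider "virtualbox" do |v|
  v.cpus = 4
  v.memory = 4096
end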

After you make your adjustments, you just need to spin up the yungsang version of boot2docker and jump into the VM.

vagrant up
vagrant ssh

From within the VM you can run your docker commands just like you normally would.  Ports get forwarded through to your local machine like magic and everybody is happy.



Useful tools for Docker development

Docker is still a young project, and as such the ecosystem around it hasn’t quite matured to the point that many people feel comfortable using it yet.  It is nice to have such a fast growing set of tools, however the downside to all of this is that many of them are not production ready.  I think as the ecosystem solidifies and Docker adoption grows, we will see a healthy set of solid, production ready tools built off of the current generation.

Once you get introduced to the concepts and ideas behind Docker you quickly realize the power and potential that it holds.   Inevitably though, there comes a “now what?” moment where you basically realize that Docker can do some interesting things but get stuck because there are barriers to simply dropping Docker into a production environment.

One problem is that you can’t simply “turn on” Docker in your environment, so you need tools to help manage images and containers, handle orchestration, development, and so on.  So there are a number of challenges to taking Docker and doing useful and interesting things with it once you get past the introductory novelty of building an image and deploying simple containers.

I will attempt to make sense of the current state of Docker and to help take some of the guesswork out of which tools to use in which situations and scenarios for those that are hesitant to adopt Docker.  This post will focus mostly around the development aspects of the Docker ecosystem because that is a nice gateway to working with and getting acquainted with Docker.

Boot2Docker

As you may be aware, Docker does not (yet) run natively on Mac OS X or Windows.  This can definitely be a hindrance for adoption and building Docker acceptance amongst developers.  Boot2Docker massively simplifies this issue by essentially creating a sandbox to work with Docker, as a thin layer between Docker and the Mac (or Windows) host via the boot2docker VM.

You can check it out here, but essentially you will download a package and install it and you are ready to start hacking away on Docker on your Mac.  Definitely a must for Mac OS as well as Windows users that are looking to begin their Docker journey, because the complexity is completely removed.

Behind the scenes, a number of things get abstracted away and simplified with Boot2docker, like setting up SSH keys, managing network interfaces, setting up VM integrations and guest additions, etc.  Boot2docker also bundles a CLI for managing the VM that runs Docker, so it is easy to manage and configure the VM from the terminal.

CoreOS

It would take many blog posts to try to describe everything that CoreOS and its tooling can do.  The reason I am mentioning it here is that CoreOS is one of those core building blocks that is quickly becoming necessary in any Docker environment.  Docker as it is today is not specifically designed for distributed workloads, and as such doesn’t provide much of the tooling around how to solve the challenges that accompany distributed systems.  However, CoreOS bridges this gap very well.

CoreOS is a minimal Linux distribution that aims to help with a number of Docker related tasks and challenges.  It is distributed by design, so it can do some really interesting things with images and containers using etcd, systemd, fleetd, confd and others as the platform continues to evolve.

Because of this tooling and philosophy, CoreOS machines can be rebooted on the fly without interrupting services and clustered processes across machines.  This means that maintenance can occur whenever and wherever, which makes the resiliency factor very high for CoreOS servers.

Another highlight is its security model, which is a push based model.  For example, instead of manually updating servers with security patches, the CoreOS maintainers periodically push updates to servers, alleviating the need to update all the time.  This was very nice when the latest shellshock vulnerability was released because within a day or so, a patch was automatically pushed to all CoreOS servers, automating the otherwise tedious process of updating servers, especially without config management tools.

Fig

Fig is a must have for anybody that works with Docker on a regular basis, i.e. developers.  Fig allows you to define your environment in a simple YAML config file and then bring up an entire development environment with one command, fig up.

Fig works very well for a development workflow because you can rapidly prototype and test how Docker images will work together, and eliminate issues that might otherwise crop up without an easy way to test things.  For example, if you are working on an application stack you can simply define how the different containers should work and interact together in the fig file.
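
For example, a bare bones fig.yml for a web app plus a database might look something like this (the image and service names are just illustrative):

web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres

Running fig up against a file like this builds the web image, pulls the database image and wires the two containers together.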

The downside to Fig is that in its current form it isn’t really equipped to deal with distributed Docker hosts, something that a large number of projects are attempting to solve and simplify.  This shouldn’t be an issue though if you are aware of its limitations beforehand and know that there are some workloads Fig is not built for.

Panamax

This is a cool project out of CenturyLink Labs that aims to solve problems around Docker app development and orchestration.  Panamax is similar to Fig in that it stitches containers together logically, but it is slightly different in a few regards.  First, Panamax builds on CoreOS to leverage some of its built in tools (etcd, fleetd, etc.).  Another thing to note is that currently Panamax only supports single host deployments.  The creators of Panamax have stated that clustered support and multi host tenancy are in the works, but for now you will have to use Panamax on a single host.

Panamax simplifies Docker image and application orchestration (kind of) in the background, and additionally places a nice layer of abstraction on top of this process so that managing the Docker image “stack” becomes even easier, through a slick GUI.  With the GUI you can set environment variables, link containers together, and bind ports and volumes.

Panamax draws a number of its concepts from Fig.  It uses templates as the underlying way to compose containers and applications, which is similar to the Fig config files, as both use YAML files to compose and orchestrate Docker container behavior.  Another cool thing about Panamax is that there is a public template repo for getting different application and container stacks up and running, so the community participation is a really nice aspect of the project.

If setting up a command line config file isn’t an ideal solution in your environment, this tool is definitely worth a look.  Panamax is a great way to quickly develop and prototype Docker containers and applications.

Flocker

This is a very young but interesting project.  It looks interesting because of the way that it handles volume management.  Right now one of the biggest challenges to widespread Docker adoption is exactly the problem that Flocker solves: the ability to persist storage across distributed hosts.

From their github page:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.

Basically, Flocker is using some ZFS magic behind the scenes to allow volumes to float between servers, allowing for persistent storage across machines and containers.  That is a huge win for building distributed systems that require persistent data and storage, e.g. databases.

Definitely keep an eye on this project for improvements, and look for them to push this area in the future.  The creators have said it isn’t quite production ready yet, but it is a great tool to use in a test or staging environment.

Flynn

Flynn touts itself as a Platform as a Service (PaaS) built on Docker, in a very similar vein to Heroku.  Having a Docker PaaS is a huge win for developers because it simplifies developer workflow.  There are some great benefits of having a PaaS in your environment; the subject could easily expand to be its own topic of conversation.

The approach that Flynn takes (and PaaS in general) is that operations should be a product team.  With Flynn, ops can provide the platform and developers can focus on their tasks much more easily: developing software, testing, and generally having the time to focus on development tasks instead of fighting operations.  Flynn does a nice job of decoupling operations tasks from dev tasks so that developers don’t need to rely on operations to do their work and operations don’t need to concern themselves with development tasks, which can cause friction and create efficiency issues.

Flynn works by tying together a number of different tools, created specifically to solve the challenges of building a PaaS (scheduling, persistent storage, orchestration, clustering, etc.), so that they perform their workloads via Docker as one single entity.

Currently its developers state that Flynn is not quite suitable for production use yet, but it is still mature enough to use and play around with and even deploy apps to.

Deis

Deis is another PaaS for Docker, aiming to solve the same problems and challenges that Flynn does, so there is definitely some overlap between the projects as far as end users are concerned.  There is a nice CLI tool for managing and interacting with Deis, and it offers much of the same functionality that either Heroku or Flynn offer.  Deis can do things like horizontal application scaling, supports many different application frameworks, and is Open Source.

Deis is similar in concept to Flynn in that it aims to solve PaaS challenges but they are quite different in their implementation and how they actually achieve their goals.

Both Flynn and Deis aim to create platforms to build Docker apps on top of, but they go about it by somewhat different means.  As the creator of Deis explains, Deis is much more practical in its approach to solving PaaS issues: it takes a number of technologies and tools that have already been created and fits them together, building only the pieces that are missing.  Flynn is much more ambitious in its approach, implementing a number of its own tools and solutions, including its own scheduler, registration service, etc., and relying on only a few tools that already exist.  For example, while Flynn does all of these different things itself, Deis leverages CoreOS to do many of the tasks it needs to operate and work correctly, minimally bolting on the tooling it needs to function.

Conclusion

As the Docker ecosystem continues to evolve, more and more options seem to be sprouting up.  There are already a number of great tools in the space, but as the community continues to evolve I believe that the current tools will continue to improve and new and useful tools will be built for Docker specific workloads.  It is really cool to see how the Docker ecosystem is growing and how the tools and technologies are disrupting traditional views on a number of areas in tech, including virtualization, DevOps, deployments and application development, among others.

I anticipate the adoption of Docker to continue growing for the foreseeable future as the core Docker project continues to improve and stabilize, along with the tools built around it that I have discussed here.  It will be interesting to see where things are even six months from now in regards to the adoption and use cases that Docker has created.
