
Useful tools for Docker development


Docker is still a young project, and the ecosystem around it hasn’t matured to the point where many people feel comfortable using it yet.  It is nice to have such a fast growing set of tools, but the downside is that many of them are not production ready.  I think as the ecosystem solidifies and Docker adoption grows, we will see a healthy set of solid, production ready tools built off of the current generation.

Once you get introduced to the concepts and ideas behind Docker you quickly realize the power and potential it holds.  Inevitably though, there comes a “now what?” moment where you realize that Docker can do some interesting things but get stuck, because there are barriers to simply dropping Docker into a production environment.

One problem is that you can’t simply “turn on” Docker in your environment; you need tools to help with managing images and containers, orchestration, development, and so on.  So there are a number of challenges to tackle before you can do useful and interesting things with Docker once you get past the introductory novelty of building an image and deploying simple containers.

I will attempt to make sense of the current state of Docker and take some of the guesswork out of which tools to use in which situations for those who are hesitant to adopt it.  This post will focus mostly on the development side of the Docker ecosystem, because that is a nice gateway to working with and getting acquainted with Docker.

Boot2Docker

As you may be aware, Docker does not (yet) run natively on Mac OS X or Windows.  This can definitely be a hindrance to building Docker acceptance amongst developers.  Boot2Docker massively simplifies the issue by essentially creating a sandbox to work with Docker, acting as a thin layer between Docker and your Mac (or Windows) machine via the boot2docker VM.

You can check it out here, but essentially you download a package, install it, and you are ready to start hacking away on Docker on your Mac.  It is definitely a must for Mac OS as well as Windows users looking to begin their Docker journey, because the complexity is completely removed.

Behind the scenes, Boot2Docker abstracts away and simplifies a number of things, like setting up SSH keys, managing network interfaces, and setting up VM integrations and guest additions.  Boot2Docker also bundles a CLI for managing the VM that runs Docker, so it is easy to manage and configure the VM from the terminal.
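
For example, a first session typically looks something like this (the exact DOCKER_HOST value is printed by boot2docker when the VM comes up, so treat the address below as a placeholder rather than gospel):

boot2docker init      # create the boot2docker VM
boot2docker up        # start the VM that hosts the Docker daemon
export DOCKER_HOST=tcp://192.168.59.103:2375    # point the docker client at the VM
docker ps             # verify the client can talk to the daemon inside the VM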

CoreOS

It would take many blog posts to try to describe everything that CoreOS and its tooling can do.  The reason I am mentioning it here is because CoreOS is one of those core building blocks that are recently becoming necessary in any Docker environment.  Docker as it is today, is not specifically designed for distributed workloads and as such doesn’t provide much of the tooling around how to solve challenges that accompany distributed systems.  However, CoreOS bridges this gap very well.

CoreOS is a minimal Linux distribution that aims to help with a number of Docker related tasks and challenges.  It is distributed by design, so it can do some really interesting things with images and containers using etcd, systemd, fleet, confd and other tools as the platform continues to evolve.
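
To give a flavor of the tooling, fleet schedules plain systemd units across the cluster.  A minimal sketch of a unit that runs a container (the unit name and image are made up for illustration) might look like this:

[Unit]
Description=Example web container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name web -p 80:80 nginx
ExecStop=/usr/bin/docker stop web

Submitting it with fleetctl start web.service lets the cluster decide which machine actually runs the container.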

Because of this tooling and philosophy, CoreOS machines can be rebooted on the fly without interrupting services and clustered processes across machines.  This means that maintenance can occur whenever and wherever, which makes the resiliency factor very high for CoreOS servers.

Another highlight is its push based security model.  Instead of you manually applying security patches, the CoreOS maintainers periodically push updates to servers, so you don’t have to chase every fix yourself.  This was very nice when the recent shellshock vulnerability was released: within a day or so a patch was automatically pushed to all CoreOS servers, automating the otherwise tedious process of updating servers, especially for those without config management tools.

Fig

Fig is a must have for anybody who works with Docker on a regular basis, i.e. developers.  Fig allows you to define your environment in a simple YAML config file and then bring up an entire development environment with one command, fig up.

Fig works very well for a development workflow because you can rapidly prototype and test how Docker images will work together, and catch issues that would otherwise be hard to spot without such an easy way to test.  For example, if you are working on an application stack you can simply define how the different containers should work and interact together in the fig file, as shown below.
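
A minimal fig.yml for a web app plus a database might look something like this (the service names and ports are just illustrative):

web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres

Running fig up in that directory builds the web image, pulls the postgres image, and starts both containers linked together.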

The downside to fig is that in its current form, it isn’t really equipped to deal with distributed Docker hosts, something that you will find a large number of projects are attempting to solve and simplify.  This shouldn’t be an issue though, if you are aware of its limitations beforehand and know that there are some workloads that fig is not built for.

Panamax

This is a cool project out of CenturyLink Labs that aims to solve problems around Docker app development and orchestration.  Panamax is similar to Fig in that it stitches containers together logically, but it differs in a few regards.  First, Panamax builds on top of CoreOS to leverage some of its built in tools, like etcd and fleet.  Another thing to note is that Panamax currently only supports single host deployments.  The creators have stated that clustered, multi host support is in the works, but for now you will have to run Panamax on a single host.

Panamax simplifies Docker image and application orchestration behind the scenes and places a nice layer of abstraction on top of the process, so that managing a Docker “stack” becomes even easier through a slick GUI.  From the GUI you can set environment variables, link containers together, and bind ports and volumes.

Panamax draws a number of its concepts from Fig.  It uses templates as the underlying way to compose containers and applications, which is similar to the Fig config files, as both use YAML files to compose and orchestrate Docker container behavior.  Another cool thing about Panamax is that there is a public template repo for getting different application and container stacks up and running, so the community participation is a really nice aspect of the project.

If setting up a command line config file isn’t an ideal solution in your environment, this tool is definitely worth a look.  Panamax is a great way to quickly develop and prototype Docker containers and applications.

Flocker

This is a very young but interesting project.  It stands out because of the way it handles volume management.  Right now, one of the biggest challenges to widespread Docker adoption is exactly the problem Flocker solves: persisting storage across distributed hosts.

From their github page:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.

Basically, Flocker uses some ZFS magic behind the scenes to let volumes float between servers, allowing for persistent storage across machines and containers.  That is a huge win for building distributed systems that require persistent data and storage, e.g. databases.

Definitely keep an eye on this project for improvements, and look for them to keep pushing this area forward.  The creators have said it isn’t quite production ready yet, but it is a great tool to use in a test or staging environment.

Flynn

Flynn touts itself as a Platform as a Service (PaaS) built on Docker, in a very similar vein to Heroku.  Having a Docker PaaS is a huge win for developers because it simplifies the developer workflow.  There are many benefits to having a PaaS in your environment; the subject could easily be its own post.

The approach that Flynn takes (and PaaS in general) is that operations should be a product team.  With Flynn, ops provides the platform and developers can focus on their own tasks: writing software, testing, and generally spending time on development instead of fighting operations.  Flynn does a nice job of decoupling operations tasks from dev tasks, so developers don’t need to rely on operations to do their work and operations don’t need to concern themselves with development tasks, which otherwise causes friction and efficiency issues.

Flynn works by tying together a number of tools, created specifically to solve the challenges of building a PaaS (scheduling, persistent storage, orchestration, clustering, etc.), into one single entity that runs its workloads via Docker.

Currently its developers state that Flynn is not quite suitable for production use yet, but it is still mature enough to use and play around with and even deploy apps to.

Deis

Deis is another PaaS for Docker, aiming to solve the same problems and challenges that Flynn does, so there is definitely some overlap between the projects as far as end users are concerned.  There is a nice CLI tool for managing and interacting with Deis, and it offers much of the same functionality that Heroku or Flynn offer.  Deis can do things like horizontal application scaling, supports many different application frameworks, and is open source.

Deis is similar in concept to Flynn in that it aims to solve PaaS challenges but they are quite different in their implementation and how they actually achieve their goals.

Both Flynn and Deis aim to create platforms to build Docker apps on top of, but they go about it by somewhat different means.  As the creator of Deis explains, Deis takes the more practical approach to solving PaaS problems: it fits together a number of existing technologies and tools, and only builds the pieces that are missing.  Flynn is more ambitious, implementing much of its own tooling, including its own scheduler, registration service, and so on, and relying on only a few existing tools.  For example, while Flynn builds all of these different pieces itself, Deis leverages CoreOS for many of the tasks it needs, minimally bolting on the tooling required to function.

Conclusion

As the Docker ecosystem continues to evolve, more and more options seem to be sprouting up.  There are already a number of great tools in the space, and as the community matures I believe the current tools will keep improving and new, useful tools will be built for Docker specific workloads.  It is really cool to see how the Docker ecosystem is growing and how the tools and technologies are disrupting traditional views on a number of areas in tech, including virtualization, DevOps, deployments and application development, among others.

I anticipate Docker adoption will continue growing for the foreseeable future as the core Docker project improves and stabilizes, along with the tools built around it that I have discussed here.  It will be interesting to see where things are even six months from now with regard to the adoption and use cases that Docker has created.

Autosnap AWS snapshot and volume management tool

This is my first serious attempt at a Python tool on GitHub.  I figured it was about time: I’ve been leveraging open source tools for a long time, so I might as well try to give a little bit back.  Please check out the project and leave feedback by emailing me, opening a GitHub issue, or commenting here.  I’d love to see what can be done with this tool; there are lots of bugs to shake out and things to improve.  Even better if you have some code you’d like to contribute, as this is very much a work in progress!

Here is the project – https://github.com/jmreicha/autosnap.

Introduction

Essentially, this tool is designed to ease the management of the snapshot and volume lifecycle in an AWS environment.  Snapshots and volumes can be used together to form a simple backup management system, so by simplifying the management of these resources through the AWS API, you can easily manage backups of your AWS data.

While this obviously isn’t a full blown backup tool, it can do a few handy things, like leveraging tags to create and destroy backups based on custom expiration dates and creating snapshots based on a few other criteria, all managed with tags.  Another cool thing about handling backups this way is that you get amazing resiliency (and dirt cheap storage) by storing snapshots in S3.  Obviously, if you have a huge number of servers and volumes your mileage will vary, but this solution should scale into the hundreds, if not thousands, pretty easily.  The last big bonus is that you get nice granularity for backups.

For example, if you wanted to keep a week’s worth of backups across all your servers in a region, you would simply use this tool to set an expiration tag of 7 days and voila: you have rolling backups, based on snapshots, for the previous seven days.  You can make the backup schedule fairly granular, because the snapshots are tagged down to the hour.  It would be easy to get them down to the second if people would find that useful (I could see DB snapshots being important enough), but for now it is set to the hour.

The one drawback is that this needs to run on a daily basis, so you would need to add it to a cron job or some other tool that runs tasks periodically.  Not really a drawback so much as a side note to be aware of.
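
For example, a couple of crontab entries along these lines would take new snapshots and prune expired ones every night (the install path is just an assumption, adjust it to wherever autosnap lives on your system):

# take new snapshots at 2am, then clean up expired ones at 2:30
0 2 * * * /usr/local/bin/autosnap --create-snaps
30 2 * * * /usr/local/bin/autosnap --remove-snaps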

Configuration

There is a tiny bit of overhead to get started, so I will show you how to get going.  You will need to either set up a config file or let autosnap build you one.  By default, autosnap will help create one the first time you run it, so you can use this command to build it:

autosnap

If you would like to provide your own config, create a file called ‘.config’ in the base directory of this project.  Check the README on the GitHub page for the config variables and for any clarifications you may need.

Usage

Use the --help flag to get a feel for some of the functions of this tool.

$ autosnap --help

usage: autosnap [--config] [--list-vols] [--manage-vols] [--unmanage-vols]
 [--list-snaps] [--create-snaps] [--remove-snaps] [--dry-run]
 [--verbose] [--version] [--help]

optional arguments:
 --config          create or modify configuration file
 --list-vols       list managed volumes
 --manage-vols     manage all volumes
 --unmanage-vols   unmanage all volumes
 --list-snaps      list managed snapshots
 --create-snaps    create a snapshot if it is managed
 --remove-snaps    remove a snapshot if it is managed
 --version         show program's version number and exit
 --help            display this help and exit

The first thing you will need to do is let autosnap manage the volumes in a region:

autosnap --manage-vols

This command will simply add some tags to help with the management of the volumes.  Next, you can take a look and see which volumes were picked up and are now being managed by autosnap:

autosnap --list-vols

To take a snapshot of all the volumes that are being managed:

autosnap --create-snaps

And you can take a look at your snapshots:

autosnap --list-snaps

Just as easily you can remove snapshots older than the specified expiration date:

autosnap --remove-snaps

There are some other useful features and flags but the above commands are pretty much the meat and potatoes of how to use this tool.

Conclusion

I know this is not going to be super useful for everybody, but it is definitely a nice tool to have if you work with AWS volumes and snapshots on a semi regular basis.  As I said, this can easily be improved, so I’d love to hear what kinds of things to add or change to make it a great tool.  I hope to start working on some more interesting projects and tools in the near future, so stay tuned.

Patching CVE-2014-6271 (shellshock) with Chef

If you haven’t heard the news yet, a recently disclosed vulnerability in bash allows attackers to execute code through specially crafted environment variables.  This has some far reaching implications because bash is so widespread and runs on many different types of devices, for example network gear, routers, switches, firewalls, etc.  If that doesn’t scare you then you probably don’t need to finish reading this article.  For more information you can check out this article that helped break the story.
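
A quick way to check whether a given machine is affected is the widely circulated one liner below; if the output includes the word “vulnerable”, that bash needs patching.

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"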

I have been seeing a lot of “OMG the world is on fire, patch patch patch!” posts and sentiment surrounding this vulnerability, but I have not really seen anybody take the time to explain how to patch and fix the issue.  It is not a difficult fix, but it might not be obvious to the more casual user or those who do not have a sysadmin or security background.

Debian/Ubuntu:

Use either of the following commands to check which bash package release you have installed.  You can check the Ubuntu USN for the patched versions.

dpkg -l bash | grep '^ii'
dpkg-query --show bash

If you are on Ubuntu 12.04 you will need to update to the following version:

bash    4.2-2ubuntu2.3

If you are on Ubuntu 13.10, and have this package (or below), you are vulnerable.  Update to 14.04!

 bash 4.2-5ubuntu3

If you are on Ubuntu 14.04, be sure to update to the most recent patched version:

bash 4.3-7ubuntu1.3

Luckily, the update process is pretty straightforward.

apt-get update
apt-get --only-upgrade install bash

If you have the luxury of managing your environment with some sort of automation or configuration management tool (get this in place if you don’t have it already!) then this process can be managed quite efficiently.

It is easy to check if a server that is being managed by Chef has the vulnerability by using knife search:

knife search node 'bash_shellshock_vulnerable:true'

From here you could create a recipe to patch the servers or fix each one by hand.  Another cool trick is that you can blast out the update to all of your Debian and RHEL based servers with the following command:

knife ssh 'platform_family:debian' 'sudo apt-get update; sudo apt-get install -y bash'; knife ssh 'platform_family:redhat' 'sudo yum -y install bash'

This will iterate over every server registered with your Chef server that is in the Debian family (including Ubuntu) or the RHEL family (including CentOS) and update the bash package so that the latest patched version gets pulled down and installed.
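
If you would rather bake this into your normal Chef runs instead of a one-off blast, a minimal recipe sketch (just an illustration, not part of any published cookbook) could look something like this:

# refresh the package cache on Debian family nodes so the patched bash is visible
execute 'apt-get update' do
  only_if { node['platform_family'] == 'debian' }
end

# upgrade bash to the newest available (patched) version
package 'bash' do
  action :upgrade
end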

You may need to tweak the knife ssh syntax a little: -x to override the SSH user and -i to feed an identity file.  This is so much faster than manually installing the update on all your servers, or even fiddling around with a tool like Fabric, which is still better than nothing.
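
For example, with a non-default SSH user and key (both values here are placeholders):

knife ssh 'platform_family:debian' 'sudo apt-get update; sudo apt-get install -y bash' -x ubuntu -i ~/.ssh/mykey.pem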

One caveat to note:  if you are not on an LTS version of Ubuntu, you will need to upgrade your server(s) to an LTS release first, either 12.04 or 14.04, to receive this patch.  Ubuntu 13.10 recently went out of support, so you will want to get your OS up to date.

One more thing:  The early patches to address this vulnerability did not entirely fix the issue, so make sure that you have the correct patch installed.  If you patched right away there is a good chance you may still be vulnerable, so simply rerun your knife ssh command to reapply the newest patch, now that the dust is beginning to settle.

Outside of this vulnerability, it is a good idea to get your OS onto an Ubuntu LTS version anyway, so you continue receiving critical software updates and security patches for much longer than the normal six month release cycle of the non-LTS releases.

Transitioning from bash to zsh


I have known about zsh for a long time now, but never really had a compelling reason to switch my default shell from bash until just recently, when I started hearing more and more people talk about how powerful and awesome zsh is.  So I thought I might as well take the dive and get started, since that’s what all the cool kids seem to be doing these days.  At first I thought changing my shell would be a PITA, given all the customizations and idiosyncrasies I have grown accustomed to with bash, but I didn’t find that to be the case at all when switching to zsh.

First and foremost, I used a tool called oh-my-zsh to help with the transition.  If you haven’t heard about it yet, oh-my-zsh aims to be a sort of framework for zsh.  It is a nice, clean way to get started with zsh because it gives you a good set of defaults out of the box without much configuration or tweaking, and I found that many of my little tricks and shortcuts were already baked in to oh-my-zsh, along with a ton of other settings and customizations that I did not have with bash.

From their github page:

oh-my-zsh is an open source, community-driven framework for managing your ZSH configuration. It comes bundled with a ton of helpful functions, helpers, plugins, themes, and few things that make you shout…

Here are just a few of the improvements that zsh/oh-my-zsh offer:

  • Improved tab completion
  • Persistent history across all shells
  • Easy to use plugin system
  • Easy to use theme system
  • Autocorrect

The most obvious difference that I have noticed is the improved, out of the box tab completion, which I think should be enough on its own to convince you!  On top of that, most of my tricks and customizations were already turned on with oh-my-zsh.  Another nice touch is that themes and plugins come along as part of the package, which is really nice for easing the transition.
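
If you are curious what is happening under the hood, most of these behaviors map to plain zsh options and modules that oh-my-zsh enables for you.  A rough sketch of a bare ~/.zshrc (without oh-my-zsh) that turns on similar behavior might include lines like these:

autoload -Uz compinit && compinit   # smarter tab completion
setopt SHARE_HISTORY                # share history across all open shells
setopt HIST_IGNORE_DUPS             # skip duplicate history entries
setopt CORRECT                      # offer to autocorrect mistyped commands
setopt AUTO_CD                      # cd into a directory by typing its name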

So after spending an afternoon with zsh I am convinced that it is the way to go (at least for my own workflow).  Of course there are always caveats and hiccups along the way, as I’ve learned there are with pretty much everything.

Tuning up tmux

Out of the box, my tmux config uses the default shell, which happens to be bash.  So I needed to modify my ~/.tmux.conf to reflect the switch over to zsh.  It is a pretty straightforward change, but it is something you will need to make note of if you use tmux and are transitioning to zsh.

set-option -g default-shell /usr/bin/zsh

I am using Ubuntu 14.04, so my zsh is installed at /usr/bin/zsh.  The other thing you will need to do is make sure you kill any stale tmux processes after switching to zsh.  I found one running in the background that was blocking me from using the new config.
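
If you are not sure where zsh lives on your system, or suspect an old tmux server is hanging around, the following should sort it out (note that killing the server closes any existing tmux sessions):

which zsh           # the path to use for default-shell in ~/.tmux.conf
tmux kill-server    # kill any stale tmux server so the new config gets picked up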

Goodies

There is a nice command cheat sheet for zsh.  Take some time to learn these shortcuts and aliases; they are great time savers and very useful.

oh-my-zsh comes bundled with a large number of goodies.  At the time of this writing there were 135 plugins as well as a variety of themes.  You can check the plugins wiki page for descriptions of the various plugins.  To turn on a specific plugin you will need to add it to your ~/.zshrc config file.  Find the following line in your config:

plugins=(git)

and add plugins separated by spaces

plugins=(git vagrant chef)

You will need to reload the config for the changes to be picked up.

source ~/.zshrc

Most themes are hosted on the wiki, but there is also a web site dedicated to displaying the various themes, which is really cool.  It does a much better job of showing the differences between them.  You can check it out here.  Themes function in a similar way to plugins.  If you want to change your theme, edit your ~/.zshrc file and select the desired theme.

ZSH_THEME="clean"

You will need to reload your config for this option as well.

source ~/.zshrc

Conclusion

If you haven’t already made the switch to zsh, I recommend that you at least experiment and play around with it before making any final decisions.  You may be set in your ways and happy with bash or whatever shell you are used to, but for me, all the awesomeness changed my opinion and made me reevaluate my biases.  If you’re worried about making the switch, oh-my-zsh makes the transition so painless there is practically no reason not to try it out.

This post is really just the tip of the iceberg of what this shell is capable of; I just wanted to expose readers to some of its glory.  Zsh offers so much more power and customization than I have covered here, and it is an amazing productivity tool with little learning overhead.

Let me know if you have any awesome zsh tips or tweaks that folks should know about.

Uchiwa dashboard for Sensu

Recently the new Uchiwa dashboard redesign for Sensu was released, and it is awesome.  It’s hard to describe how much of a leap forward this most recent release is, but it finally feels like Sensu is as “complete” and polished a product as other open source and commercial offerings.  And if you haven’t heard of Sensu yet, you are missing out.  As described on the website sensuapp.org, Sensu is an open source monitoring framework.  Instead of the traditional monolithic monitoring solutions (cough Nagios cough) that typically come to mind, the design of Sensu allows for a more scalable and distributed approach to monitoring, which hasn’t really been done before and offers a number of benefits, including a variety of dashboards to choose from.

Sensu touts itself as a “monitoring router”, which is a much more intuitive approach to monitoring once you wrap your head around the concept and let go of the monolithic idea.  For example, you can plug different components into your monitoring solution very easily with Sensu, and you aren’t tied to one solution.  If you need graphing and analytics you can choose from any number of existing options, Graphite, hosted Graphite, DataDog, New Relic, etc., and more importantly, if something isn’t working as well as you’d like, you can simply rip out the component that isn’t working in favor of something that fits your needs better.  In other words, it adds flexibility: no more hammering square pegs into round holes.  Sensu also offers nice scalability features.  Since all of the pieces are loosely coupled, you don’t need to worry about scaling the entire beast; you can pick and choose which pieces to scale and when.  Sensu itself is also scalable.  The backbone of Sensu relies on RabbitMQ (soon to be opened up to other message queueing services), so as things get busier, simply cluster or add nodes to your RabbitMQ cluster.  Granted, RabbitMQ isn’t exactly the easiest thing to scale, but it is possible.
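
To give a flavor of how pieces plug in, checks are just small JSON definitions dropped into /etc/sensu/conf.d/ on the server, pointing at whatever scripts and handlers you want to use.  A minimal sketch (the check name, command and subscription below are only illustrative) might look like this:

{
  "checks": {
    "check_disk": {
      "command": "check-disk-usage.rb -w 80 -c 90",
      "subscribers": ["base"],
      "interval": 60,
      "handlers": ["default"]
    }
  }
}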

Because of its distributed nature, Sensu by default is just the monitoring core.  In the beginning, that meant either writing your own dashboard to communicate with the Sensu server or using the default dashboard.  As the ecosystem has evolved, the default dashboard has not been able to keep up with the evolution of Sensu and the needs of those using it.

Traditionally in the monitoring world, if you are not familiar, design and usability have not exactly been high priorities for dashboards, graphics and GUIs in the majority of tools out there.  That is changing somewhat with some of the newer cloud tools like DataDog and New Relic, but the problem is that those solutions are commercial and can become expensive.  The bane of the open source solutions, at least for me, is how ugly the dashboards and user experiences have been (the Sensu default dashboard was an exception).  But the latest release of Uchiwa for Sensu has really changed the game in my opinion.  It is much more modern and elegant.

We have gone from this:

Nagios dashboard

To this:

Uchiwa dashboard

Which one would you rather use?  The new dashboard is much easier to use and far more elegant.  The main dashboard (pictured above) gives a nice 1,000 ft view of what is going on in your environment, and it is easy to quickly check it for any issues.

In addition to the home view, there is a nice checks view to get a glimpse of pretty much everything that’s going on in your environment.  Sometimes with a large number of checks it is very easy to forget what exactly is happening so this is a nice way to double check.

Uchiwa checks

There is also a similar view for checking clients.  One small but very nice piece of info here is that it will display the Sensu client version for each host.  If there are any issues with a host it is easy to tell from here.

Uchiwa clients

You can also drill down into any of these hosts to get a better picture of what exactly is going on.  It will show you exactly which checks are being run for the host, as well as some other very handy information.

Uchiwa details

From this page you can even select an individual check and see exactly how it is set up and behaving.  It is easy to silence a single alert or all alerts for a client; just click the sound icon in any context to silence or unsilence an alert or an entire client.  This has been handy for minimizing alert spam when doing maintenance on specific hosts.

Sensu check

One last handy feature is the info page.  From here you can check out some of the Sensu server info as well as Uchiwa settings.  This is also good for troubleshooting.

Info page

That pretty much covers the highlights of the new UI.  As I have said, I am very excited about this release because it is an awesome GUI, and there are some really interesting improvements and additions planned for Uchiwa, which will make an even stronger and more compelling case for switching to Sensu and Uchiwa if you haven’t already.

If you have direct questions about the post, you can comment here.  Otherwise, the best place to get help with most of this stuff is probably the #sensu channel on IRC; that’s where the majority of the project contributors hang out.  You can also check out the Uchiwa code over on GitHub.  If you ever have issues with the dashboard, that is the place to go: I would suggest browsing through the existing issues, and if you can’t find a solution, create a new one.  Don’t hesitate to jump into any of the discussions either.  The author is very friendly and helpful and is quick to respond to issues.  One final helpful resource is the Sensu docs.  Make sure you are looking at the docs for the version of Sensu you are running; there are still enough changes happening between versions that the differences can trip new users up.