Patching CVE-2014-6271 (Shellshock) with Chef

If you haven’t heard the news yet, a vulnerability was recently disclosed that exploits how bash handles environment variables.  This has some far-reaching implications because bash is so widespread and runs on many different types of devices, for example network gear, routers, switches, and firewalls.  If that doesn’t scare you, then you probably don’t need to finish reading this article.  For more information you can check out the article that helped break the story.

I have been seeing a lot of “OMG the world is on fire, patch patch patch!” posts and sentiment surrounding this recently disclosed vulnerability, but I have seen very few people taking the time to explain how to actually patch and fix the issue.  It is not a difficult fix, but it might not be obvious to more casual users or those who do not have a sysadmin or security background.
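
Before patching anything, it is worth checking whether a host is actually vulnerable.  The widely circulated test one-liner for this bug is below; if the output contains the word “vulnerable”, the host needs the patch:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"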

Debian/Ubuntu:

Use the following commands to search through your installed packages for the bash package release.  You can check the Ubuntu USN for the fixed versions.

dpkg -l | grep '^ii' | grep bash

or

dpkg-query --show bash

If you are on Ubuntu 12.04, you will need to update to the following version:

bash    4.2-2ubuntu2.3

If you are on Ubuntu 13.10 and have this package version (or an older one), you are vulnerable.  Update to 14.04!

bash    4.2-5ubuntu3

If you are on Ubuntu 14.04, be sure to update to the most recent patched package:

bash 4.3-7ubuntu1.3

Luckily, the update process is pretty straightforward.

apt-get update
apt-get --only-upgrade install bash

If you have the luxury of managing your environment with some sort of automation or configuration management tool (get one in place if you don’t have it already!), then this process can be handled quite efficiently.

It is easy to check if a server that is being managed by Chef has the vulnerability by using knife search:

knife search node 'bash_shellshock_vulnerable:true'

From here you could create a recipe to patch the servers, or fix each one by hand.  Another cool trick is that you can blast out the update to Debian and RHEL based servers with the following one-liner:

knife ssh 'platform_family:debian' 'sudo apt-get update; sudo apt-get install -y bash'; knife ssh 'platform_family:redhat' 'sudo yum -y install bash'

This will iterate over every server registered with your Chef server that is in the Debian family (including Ubuntu) or the RHEL family (including CentOS), refresh the package metadata, and install the latest patched version of bash.

You may need to tweak the syntax a little: -x to override the ssh user and -i to feed an identity file.  This is so much faster than manually installing the update on all of your servers, or even fiddling around with a tool like Fabric, which is still better than nothing.
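
For example, a tweaked version of the Debian half of that command might look like the following (the ssh user and key path here are placeholders for your own):

knife ssh 'platform_family:debian' 'sudo apt-get update; sudo apt-get install -y bash' -x ubuntu -i ~/.ssh/id_rsa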

One caveat to note: if you are not on an LTS version of Ubuntu, you will need to upgrade your server(s) to an LTS release first, either 12.04 or 14.04, to qualify for this patch.  Ubuntu 13.10 went out of support in August, about a month before this writing, so you will want to get your OS up to date.

One more thing: the early patches to address this vulnerability did not entirely fix the issue, so make sure that you have the correct patch installed.  If you patched right away, there is a good chance you may still be vulnerable, so simply rerun your knife ssh command to reapply the newest patch now that the dust is beginning to settle.

Outside of this vulnerability, it is a good idea to get your servers onto an Ubuntu LTS release anyway, so that they continue receiving critical software updates and security patches for a longer duration than the normal six month release cycle of the server distribution.


Transitioning from bash to zsh

I have known about zsh for a long time now, but until recently I never really had a compelling reason to switch my default shell away from bash.  Lately I have been hearing more and more people talking about how powerful and awesome zsh is, so I thought I might as well take the dive and get started, since that’s what all the cool kids seem to be doing these days.  At first I thought that changing my shell was going to be a PITA because of all the customizations and idiosyncrasies I have grown accustomed to in bash, but that turned out not to be the case at all when switching to zsh.

First and foremost, I used a tool called oh-my-zsh to help with the transition.  If you haven’t heard of it yet, oh-my-zsh aims to be a sort of framework for zsh.  The project is a nice, clean way to get started with zsh because it gives you a good set of defaults out of the box without much configuration or tweaking, and I found that many of my little tricks and shortcuts were already baked in to oh-my-zsh, along with a ton of other settings and customizations that I did not have with bash.
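
Getting set up is quick.  The exact install command may change over time, so check the oh-my-zsh README first, but on Ubuntu it boiled down to something like the following (the last command makes zsh your login shell):

sudo apt-get install -y zsh
curl -L https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh
chsh -s $(which zsh)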

From their github page:

oh-my-zsh is an open source, community-driven framework for managing your ZSH configuration. It comes bundled with a ton of helpful functions, helpers, plugins, themes, and a few things that make you shout…

Here are just a few of the improvements that zsh/oh-my-zsh offers:

  • Improved tab completion
  • Persistent history across all shells
  • Easy to use plugin system
  • Easy to use theme system
  • Autocorrect

The most obvious difference I have noticed is the improved out-of-the-box tab completion, which I think should be enough on its own to convince you!  On top of that, most of my tricks and customizations were already turned on in oh-my-zsh.  Another nice touch is that themes and plugins come along as part of the package, which really helps ease the transition.

So after spending an afternoon with zsh I am convinced that it is the way to go (at least for my own workflow).  Of course there are always caveats and hiccups along the way, as I’ve learned there are with pretty much everything.

Tuning up tmux

Out of the box, my tmux config uses the default shell, which happens to be bash.  So I needed to modify my ~/.tmux.conf to reflect the switch over to zsh.  It is a pretty straightforward change, but it is something you will need to make note of if you use tmux and are transitioning to zsh.

set-option -g default-shell /usr/bin/zsh

I am using Ubuntu 14.04, so my zsh is installed at /usr/bin/zsh.  The other thing you will need to do is make sure you kill any stale tmux processes after switching to zsh.  I found one running in the background that was blocking me from using the new config.
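
A quick way to check for and clear out stale sessions is below; note that kill-server terminates every running tmux session, so detach or save anything important first:

tmux ls
tmux kill-server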

Goodies

There is a nice command cheat sheet for zsh.  Take some time to learn these shortcuts and aliases; they are great time savers and are very useful.

oh-my-zsh comes bundled with a large number of goodies.  At the time of this writing there were 135 plugins as well as a variety of themes.  You can check the plugins wiki page for descriptions of the various plugins.  To turn on a specific plugin you will need to add it to your ~/.zshrc config file.  Find the following line in your config:

plugins=(git)

and add plugins separated by spaces:

plugins=(git vagrant chef)

You will need to reload the config for the changes to be picked up.

source ~/.zshrc

Most themes are hosted on the wiki, but there is also a web site dedicated to displaying the various themes, which is really cool and does a much better job of showing the differences between them.  You can check it out here.  Themes work in a similar way to plugins: if you want to change your theme, edit your ~/.zshrc file and set the desired theme.

ZSH_THEME="clean"

You will need to reload your config for this option as well.

source ~/.zshrc

Conclusion

If you haven’t already made the switch to zsh, I recommend that you at least experiment and play around with it before you make any final decisions.  You may be set in your ways and happy with bash or whatever other shell you are used to, but for me, all the awesomeness changed my opinion and made me reevaluate my biases.  If you’re worried about making the switch, oh-my-zsh makes the transition so painless that there is practically no reason not to try it out.

This post really just scratches the surface of what this shell is capable of; I just wanted to expose readers to some of its glory.  Zsh offers much more power and customization than I have covered here, and it is an amazing productivity tool with very little learning overhead.

Let me know if you have any awesome zsh tips or tweaks that folks should know about.


Uchiwa dashboard for Sensu

Recently the Uchiwa dashboard redesign for Sensu was released, and it is awesome.  It’s hard to describe how much of a leap forward this release is, but Sensu finally feels as “complete” and polished as the other open source and commercial products that exist.  And if you haven’t heard of Sensu yet, you are missing out.  As described on sensuapp.org, Sensu is an open source monitoring framework.  Instead of the traditional monolithic monitoring solutions that typically come to mind (cough, Nagios, cough), the design of Sensu allows for a more scalable and distributed approach to monitoring, which hasn’t really been done before and offers a number of benefits, including a variety of dashboards to choose from.

Sensu touts itself as a “monitoring router”, which is a much more intuitive approach to monitoring once you wrap your head around the concept and let go of the monolithic idea.  For example, you can plug different components into your monitoring solution very easily with Sensu, and you aren’t tied to one solution.  If you need graphing and analytics you can choose from any number of existing solutions: Graphite, hosted Graphite, DataDog, NewRelic, etc.  More importantly, if something isn’t working as well as you’d like, you can simply rip out the component that isn’t working in favor of something that fits your needs better.  In other words, it adds flexibility: no more hammering square pegs into round holes.  Sensu also offers nice scalability features.  Since all of the pieces are loosely coupled you don’t need to worry about scaling the entire beast; you can pick and choose which pieces to scale and when.  Sensu itself is also scalable.  Since the backbone of Sensu relies on RabbitMQ (soon to be opened up to other message queueing services), as things get busier you simply cluster or add nodes to your RabbitMQ cluster.  Granted, RabbitMQ isn’t exactly the easiest thing to scale, but it is possible.

With its distributed nature, Sensu by default is just a monitor.  In the beginning, that meant either writing your own dashboard to communicate with Sensu server or using the default dashboard.  As the ecosystem has evolved, the default dashboard has not been able to keep up with the evolution of Sensu and the needs of those using it.

If you are not familiar with the monitoring world, design and usability have traditionally not been high priorities for the dashboards, graphs, and GUIs of the majority of tools that exist.  That is changing somewhat with some of the newer cloud tools like DataDog and NewRelic, but those solutions are commercial and can become expensive.  The bane of the open source solutions, at least for me, has been how ugly the dashboards and user experiences are (the Sensu default dashboard was an exception).  The latest release of Uchiwa for Sensu has really changed the game in my opinion.  It is much more modern and elegant.

We have gone from this:

Nagios dashboard

To this:

uchiwa dashboard

Which one would you rather use?  The new dashboard is much easier to use and much more elegant.  The main dashboard (pictured above) gives a nice 1,000 ft view of what is going on in your environment, and makes it easy to quickly check for any issues.

In addition to the home view, there is a nice checks view that gives a glimpse of pretty much every check defined in your environment.  With a large number of checks it is very easy to forget what exactly is running, so this is a nice way to double check.

Uchiwa checks

There is also a similar view for clients.  One small but very nice piece of info here is that it displays the Sensu client version for each host, so if there are any issues with a host it is easy to tell from here.

Uchiwa clients

You can also drill down into any of these hosts to get a better picture of what exactly is going on.  It will show you exactly which checks are being run for the host, as well as some other very handy information.

uchiwa details

From this page you can even select an individual check and see exactly how it is set up and behaving.  It is easy to silence a single alert or all alerts for a client; just click on the sound icon in any context to silence or unsilence an alert or an entire client.  This is handy for minimizing alert spam when doing maintenance on specific hosts.

Sensu check

One last handy feature is the info page.  From here you can check out some of the Sensu server info as well as Uchiwa settings.  This is also good for troubleshooting.

Info page

That pretty much covers the highlights of the new UI.  As I have said, I am very excited about this release because it is an awesome GUI, and there are some really interesting improvements and additions planned for Uchiwa that will make Sensu and Uchiwa an even stronger and more compelling option if you haven’t made the switch already.

If you have questions about this post, you can comment here.  Otherwise, the best place to get help with most of this stuff is probably the #sensu channel on IRC; that’s where the majority of the project contributors hang out.  You can also check out the Uchiwa code over on GitHub.  If you ever have issues with the dashboard, that is the place to go: browse through the existing issues, and if you can’t find a solution, create a new one.  Don’t hesitate to jump into any of the discussions either; the author is very friendly and helpful and is quick to respond to issues.  One final helpful resource is the Sensu docs.  Make sure you are looking at the docs for the version of Sensu you are running; there are still enough changes between versions that the differences can trip up new users.


Recover a Grafana dashboard

Grafana (optionally) uses Elasticsearch to store its dashboards.  If you ever migrate your Graphite/Grafana servers, or simply need to grab all of your dashboards from the old server, you will likely be looking for them in Elasticsearch.  Luckily, migrating to a new server and moving the dashboards is an uncomplicated process.  In this post I will walk through moving Grafana dashboards between servers.

This guide assumes that Elasticsearch has been installed on both the old and the new server.  The first thing to look at is your current Grafana config, the file you probably used to set up your Grafana environment originally.  It resides in the directory you placed the Grafana server files into and is named config.js.  Inside this config file there is a setting that tells Grafana which Elasticsearch index to save dashboards to, which by default is called “grafana-dashboards” and should look something like this:

/**
 * Elasticsearch index for storing dashboards
 *
 */
 grafana_index: "grafana-dashboards",

Now, if you still have access to the old server, it is merely a matter of copying the Elasticsearch index directory that houses your Grafana dashboards over to the new location.  By default on an Ubuntu installation, the Elasticsearch data files are placed in the following path:

/var/lib/elasticsearch/elasticsearch/nodes/0/indices/grafana-dashboards

Replace “0” with the appropriate node number if this is a clustered Elasticsearch instance; otherwise you should see the grafana-dashboards directory.  Simply copy this directory over to the new server with rsync or scp and put it in a temporary location for the time being (/tmp, for example).  On the new server, rename the existing grafana-dashboards directory to something different, in case it contains newly created dashboards that you would like to keep.  Then move the original dashboards (from the old server) out of /tmp and into the path above, renaming the directory back to grafana-dashboards.  The final step is to chown the directory and its contents.  The steps for accomplishing this look similar to the following.

On the old host:

cd /var/lib/elasticsearch/elasticsearch/nodes/0/indices/
rsync -avP -e ssh grafana-dashboards/ user@remote_host:/tmp/

On the new host:

cd /var/lib/elasticsearch/elasticsearch/nodes/0/indices/
mv grafana-dashboards grafana-dash-orig
mv /tmp/grafana-dashboards ./grafana-dashboards
chown -R elasticsearch:elasticsearch grafana-dashboards
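
If you want to sanity check that Elasticsearch sees the index after the move, a quick check (assuming Elasticsearch is listening on its default port 9200) is:

curl 'localhost:9200/_cat/indices?v' | grep grafana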

You don’t even need to restart the webserver or Elasticsearch for the old dashboards to show up.  Just reload the page and bam.   Dashboards.

grafana dashboard


Cloud Backup Tutorial

I have been knee deep in backups for the past few weeks, but I think I can finally see the light at the end of the tunnel.  What looked like a simple enough idea to implement turned out to be a much more complicated task.  I don’t know why, but there seems to be practically no information out there covering this topic.  Maybe it’s just because backups suck?  Either way, they are extremely important to the vitality of a company, and without a workable set of backups you are screwed if something happens to your data.  So today I am going to write about managing cloud data and cloud backups and hopefully shine some light on this seemingly foreign topic.

Part of being a cloud based company means dealing with cloud based storage.  Some of the terms involved are slightly different from standard backup and storage terminology: things like buckets, object based storage, S3, GCS, and boto all come to mind when dealing with cloud based storage and backups.  It turns out there are a handful of tools for dealing with these storage requirements, which I will be discussing today.

The Google and Amazon APIs are nice because they allow third party tools to manage the storage, outside of the official and standard tools.  In my journey to find a solution I ran across several workable tools that I would like to mention.  The end goal of this project was to sync a massive amount of files and data from S3 storage to GCS.  I found that the following tools each covered at least some of my requirements, and each has its own set of uses.  They are included here in no particular order:

  • duplicity/duply – This tool works with S3 for small scale storage.
  • Rclone – This one looks very promising, supports S3 to GCS sync.
  • aws-cli – The official command line tool supported by AWS.

S3cmd – This was the first tool that came close to doing what I wanted.  It’s a really nice tool for smallish amounts of files, has a number of handy features, and is capable of syncing S3 buckets.  Unfortunately, the way it is designed does not allow for reading and writing a very large number of files, so it is best suited to smaller sets of data.
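
As a rough example, a bucket-to-bucket sync with a recent version of s3cmd looks something like this (the bucket names are placeholders):

s3cmd sync s3://source-bucket/ s3://destination-bucket/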

s3s3mirror – This is an extremely fast copy tool, written in Java and hosted on GitHub.  This thing is awesome at copying data quickly.  It was able to copy about 6 million files in a little over 5 hours the other day.  One extremely nice feature is the intelligent sync built in, so it knows which files have already been copied over.  Even better, the tool is even faster when it is only doing reads, so once your initial sync has completed, additional syncs are blazing fast.

s3s3mirror ships as a jar file, so you will need Java installed on your system to run it.

sudo apt-get install openjdk-7-jre-headless

Then you will need to grab the code from Github.

git clone git@github.com:cobbzilla/s3s3mirror.git

And to run it.

./s3s3mirror.sh first-bucket/ second-bucket/

That’s pretty much it.  There are some handy flags, but this is the main command.  There is an -r flag for changing the retry count, a -v flag for verbosity and troubleshooting, as well as a --dry-run flag to see what will happen without copying anything.
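
For example, a cautious first pass that just reports what would be copied might look like this:

./s3s3mirror.sh -v --dry-run first-bucket/ second-bucket/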

The only downside of this tool is that it only seems to support S3 at this point, although the source is posted on GitHub so it could easily be adapted to work with GCS, which is something I am actually looking at doing.

Gsutil – The Python command line tool created and developed by Google.  This is the most powerful tool I have found so far.  It has a ton of command line options, can communicate with other cloud providers, is open source, and is under active development and maintenance.  Gsutil is scriptable and has code for dealing with failures: it can retry failed copies and resume transfers, and it is smart about checking which files and directories already exist for scenarios where synchronizing buckets is important.

The first step after installing gsutil is to run through the configuration with the gsutil config command.  Follow the instructions to link gsutil with your account.  After the initial configuration has been run, you can modify or update all of the gsutil goodies by editing the config file, which lives in ~/.boto by default.  One config change worth mentioning is parallel_process_count and parallel_thread_count.  These control how much data can get shoved through gsutil at once, so on really beefy boxes you can crank these numbers up quite a bit higher than their defaults.  To take advantage of the parallel processing you simply need to set the -m flag on your gsutil command.

gsutil -m cp -r /local/path gs://bucket-name

or

gsutil -m rsync -r /local/path gs://bucket-name
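
As a rough sketch, the parallelism settings live under the [GSUtil] section of ~/.boto and look something like the following; the counts here are arbitrary examples, so tune them to your hardware:

[GSUtil]
parallel_process_count = 8
parallel_thread_count = 10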

One very nice feature of gsutil is that it has built-in functionality to interact with AWS and S3 storage.  To enable this functionality you need to copy your AWS access key ID and secret access key into your ~/.boto config file.  After that, you can test out the updated config by listing the buckets that live on S3.

gsutil ls s3://
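
For reference, the AWS keys go in the [Credentials] section of ~/.boto; the values below are placeholders:

[Credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = your-secret-access-key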

So your final command to sync an S3 bucket to Google Cloud Storage would look similar to the following:

gsutil -m cp -R s3://bucket-name gs://bucket-name

Notice the -R flag, which makes the copy recursive, copying everything in the source bucket to the destination instead of just a single layer.

There is one final tool that I’d like to cover, which isn’t a command line tool but turns out to be incredibly useful for copying large sets of data from S3 into GCS: the GCS Online Import tool.  Follow the link and fill out the interest form, and after a little while you should hear from somebody at Google about setting up and using your new account.  It is free to use and the support is very good.  Once you have been approved, you will need to provide a little bit of information to set up sync jobs (your AWS ID and key) and allow your Google account to sync the data, but it is all very straightforward, and if you have any questions the support is excellent.  This tool saved me from having to sync my S3 storage to GCS manually, which would have taken at least 7 days (and that was even with a monster EC2 instance).

Ultimately, the tools you choose will depend on your specific requirements.  I ended up using a combination of s3s3mirror, AWS bucket versioning, the Google cloud import tool, and gsutil.  But my requirements are probably different from the next person’s, and each backup scenario is unique, so a combination of these various tools gives you the flexibility to handle pretty much any scenario.  Let me know if you have any questions or know of other tools that I have failed to mention here.  Cloud backups are an interesting and unique challenge that I am still mastering, so I would love to hear any tips and tricks you may have.
