Cloud Backup Tutorial

I have been knee-deep in backups for the past few weeks, but I think I can finally see light at the end of the tunnel.  What looked like a simple enough idea to implement turned out to be a much more complicated task to accomplish.  I don’t know why, but there seems to be practically no information out there covering this topic.  Maybe it’s just because backups suck?  Either way, they are extremely important to the vitality of a company: without a workable backup, you are screwed if something happens to your data.  So today I am going to write about managing cloud data and cloud backups and hopefully shine some light on this seemingly foreign topic.

Part of being a cloud based company means dealing with cloud based storage.  Some of the terms involved are slightly different from standard backup and storage terminology.  Terms like buckets, object based storage, S3, GCS and boto all come to mind when dealing with cloud based storage and backups.  It turns out that there are a handful of tools out there for dealing with these storage requirements, which I will be discussing today.

The Google and Amazon APIs are nice because they allow third party tools to be built for managing storage, outside of the official and standard tools.  In my journey to find a solution I ran across several workable tools that I would like to mention.  The end goal of this project was to sync a massive amount of files and data from S3 storage to GCS.  I found that the following tools all covered at least some of my requirements and each has its own set of uses.  They are included here in no particular order:

  • duplicity/duply – This tool works with S3 for small scale storage.
  • Rclone – This one looks very promising and supports syncing S3 to GCS (see the sketch just after this list).
  • aws-cli – The official command line tool supported by AWS.
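
As a quick illustration of the Rclone option: after defining one remote for S3 and one for GCS with rclone config, syncing between buckets is a one-liner.  The remote names (s3 and gcs) and bucket names below are placeholder assumptions, not pulled from a real setup.

rclone config                                                   # interactive, one-time setup of the S3 and GCS remotes
rclone sync s3:source-bucket gcs:destination-bucket --dry-run   # preview what would be copied
rclone sync s3:source-bucket gcs:destination-bucket             # do the actual sync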

S3cmd – This was the first tool that came close to doing what I wanted.  It’s a really nice tool for smallish amounts of files, comes with a number of handy options and is capable of syncing S3 buckets.  Unfortunately, the way it is designed does not allow for reading and writing a very large number of files, so it is best suited to smaller sets of data.
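
For reference, basic s3cmd usage looks something like the following.  This is just a sketch; the bucket names are placeholders and sync behavior can vary between s3cmd versions, so check s3cmd --help on your install.

s3cmd --configure                                        # one-time setup of your AWS credentials
s3cmd sync s3://source-bucket/ s3://destination-bucket/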

s3s3mirror – This is an extremely fast copy tool written in Java and hosted on Github.  It was able to copy about 6 million files in a little over 5 hours the other day.  One extremely nice feature of this tool is that it has an intelligent sync built in, so it knows which files have already been copied over.  Even better, the tool is even faster when it is only doing reads, so once your initial sync has completed, additional syncs are blazing fast.

This is a jar file so you will need to have Java installed on your system to run it.

sudo apt-get install openjdk-7-jre-headless

Then you will need to grab the code from Github.

git clone git@github.com:cobbzilla/s3s3mirror.git

And to run it.

./s3s3mirror.sh first-bucket/ second-bucket/

That’s pretty much it.  There are some handy flags but this is the main command.  There is an -r flag for changing the retry count, a -v flag for verbosity and troubleshooting, as well as a --dry-run flag to see what will happen without actually copying anything.
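
Putting a couple of those flags together, a cautious first run might look something like this (just a sketch using the flags described above):

./s3s3mirror.sh -v --dry-run first-bucket/ second-bucket/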

The only downside of this tool is that it only seems to support S3 at this point, although the source is posted on Github so it could be adapted to work with GCS, which is something I am actually looking at doing.

Gsutil – The Python command line tool created and developed by Google.  This is the most powerful tool that I have found so far.  It has a ton of command line options, can communicate with other cloud providers, is open source and is under active development and maintenance.  Gsutil is scriptable and handles failures gracefully: it can retry failed copies and resume interrupted transfers, and it is smart about checking which files and directories already exist for scenarios where synchronizing buckets is important.

The first step to using gsutil after installation is to run through the configuration with the gsutil config command.  Follow the instructions to link gsutil with your account.  After the initial configuration has been run you can modify or update all the gsutil goodies by editing the config file, which lives in ~/.boto by default.  One config change worth mentioning is parallel_process_count and parallel_thread_count.  These control how much data can get shoved through gsutil at once, so on really beefy boxes you can crank these numbers up quite a bit higher than their defaults.
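
For reference, these settings live in the [GSUtil] section of the ~/.boto file.  Here is a rough sketch of what that section might look like (the values are made-up examples to tune for your own hardware):

[GSUtil]
parallel_process_count = 8
parallel_thread_count = 10

To actually utilize the parallel processing you then simply need to set the -m flag on your gsutil command, for example: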

gsutil -m cp -r source-dir gs://bucket-name
gsutil -m rsync -r source-dir gs://bucket-name

One very nice feature of gsutil is that it has built in functionality to interact with AWS and S3 storage.  To enable this functionality you need to add your AWS access key ID and secret access key to your ~/.boto config file.
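
The credentials go in the [Credentials] section of ~/.boto; the values below are obviously just placeholders for your own keys.

[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

After that, you can test out the updated config by listing the buckets that live on S3.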

gsutil ls s3://

So your final command to sync an S3 bucket to Google Cloud Storage would look similar to the following:

gsutil -m cp -R s3://bucket-name gs://bucket-name

Notice the -R flag, which makes the copy recursive so that everything in the bucket gets copied, instead of just a single layer.

There is one final tool that I’d like to cover, which isn’t a command line tool but turns out to be incredibly useful for copying large sets of data from S3 into GCS: the GCS Online Import tool.  Follow the link and fill out the interest form, and after a little while you should hear from somebody at Google about setting up and using your new account.  It is free to use and the support is very good.  Once you have been approved for using this tool you will need to provide a little bit of information for setting up sync jobs (your AWS ID and key), as well as allow your Google account to sync the data.  But it is all very straightforward, and if you have any questions the support is excellent.  This tool saved me from having to sync my S3 storage to GCS manually, which would have taken at least 7 days (and that was even with a monster EC2 instance).

Ultimately, the tools you choose will depend on your specific requirements.  I ended up using a combination of s3s3mirror, AWS bucket versioning, the Google cloud import tool and gsutil.  But my requirements are probably different from the next person’s, and each backup scenario is unique, so a combination of these various tools provides the flexibility to handle pretty much any scenario.  Let me know if you have any questions or know of other tools that I have failed to mention here.  Cloud backups are an interesting and unique challenge that I am still mastering, so I would love to hear any tips and tricks you may have.


Chef data bags with Test Kitchen

As a step towards integrating your Chef cookbooks with Jenkins CI and your testing/release pipeline, it is important to make sure that local changes pass unit and integration tests before being accepted and committed into version control.  For example, when running test kitchen it is important to fully simulate what data bags and encrypted data bags are doing on a local box for many tests to pass correctly.  So today I would like to focus on a stumbling block towards Jenkins and integration testing that I ran into recently.  There are a few lessons I learned along the way that I would like to share, because there wasn’t much good info out there on how to do this.

First, I need to give credit where it is due.  This post was a great resource in my journey to find a solution to my test kitchen data bag issue.

The largest roadblock I found along the way was that the version of test kitchen I was using shipped with chef-solo as the primary provisioner.  There has been a lot of discussion around this topic lately, and (from what I understand) it has pretty much become the general consensus within the Chef community that chef-solo should be replaced by chef-zero.  There are a number of advantages to using chef-zero instead of chef-solo, including a lesson I learned the hard way: chef-zero has the ability to act as a standalone Chef server, unlocking the ability to store data bags and encrypted data bags without having to do any sort of wacky hacking to get Chef to compile and converge correctly.

There was a good post written recently that expounds on the benefits of using chef-zero instead of chef-solo.  It is here, and is definitely worth the read if you are interested in learning more.

So with that knowledge in mind, here is what a newly updated sample .kitchen.yml file might look like:

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-13.10-i386
  - name: centos-6.4-i386

suites:
  - name: default
    data_bags_path: "test/integration/data_bags"
    run_list:
      - recipe[recipe-to-test]
    attributes:

It’s a pretty straightforward config.  The biggest change you will notice is that instead of using chef-solo as the provisioner it has been changed to chef-zero, and I now know that this makes all the difference in the world.  The next big change to observe is the data_bags_path in the suites section.  This bit of configuration basically tells the provisioner to look at the specified path when chef-zero spins up, and to use it to store data bags, encrypted data bags or other information that would normally live on the Chef server.

So in the test/integration/data_bags directory I have a subdirectory with a JSON file inside it for the specific data I am interested in, called sensu/ssl.json.  This file essentially contains the same information that is stored on the Chef server about the SSL certificates used by live hosts in the production environment, just mirrored into a sandbox/integration testing environment.
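
The resulting layout under the cookbook looks like this (with chef-zero, the directory name is the data bag name and the file name is the item id):

test/integration/data_bags/
└── sensu/
    └── ssl.json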

If you’re interested, here is a sample of what the  ssl.json file might look like:

{
  "id": "ssl",
  "server": {
    "key": "-----BEGIN RSA PRIVATE KEY----- ...",
    "cert": "-----BEGIN CERTIFICATE----- ...",
    "cacert": "-----BEGIN CERTIFICATE----- ..."
  },
  "client": {
    "key": "-----BEGIN RSA PRIVATE KEY----- ...",
    "cert": "-----BEGIN CERTIFICATE----- ..."
  }
}

Note that the “id” is “ssl”.  As far as I know, the file name must match the id when you are creating this JSON file.

Now you should be able to create and converge your test recipe with test kitchen:

kitchen create ubuntu
kitchen converge ubuntu

If you have any difficulty, let me know.  I tried to be thorough in this write up but could have accidentally skipped important information.  The main takeaways should be 1) use chef-zero wherever possible and 2) make sure your data bag paths and files are created correctly and referenced correctly in your .kitchen.yml file.  Finally, if you are still having issues, triple check the spelling and JSON syntax of your paths and configs.


Review: Webmin Administrator’s Cookbook

I just recently finished reading the Webmin Administrator’s Cookbook and thought I would share some of my thoughts and opinions about the book.  While I don’t typically review books on the blog, I thought this would be a good opportunity to discuss a nice book.  The book is written by a very knowledgeable and credible author, Michal Karzynski.  His background includes over a decade of experience as a developer in various programming languages as well as experience in scientific research.

This book is a good read for everyone from seasoned veterans and professionals all the way down to aspiring and freshly minted admins.

The book itself covers a broad, inclusive set of topics, including logging, user management, backups, web server administration and many others.  The basic approach of the book is to use Webmin as a sort of framework for discussing various administrative topics and tasks.  From their website, Webmin is described as follows:

Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user accounts, Apache, DNS, file sharing and much more.

This works out to be a perfect tool for aspiring sysadmins because it does a nice job of cloaking a lot of the nitty gritty complexity and detail that can be overwhelming and confusing for admins or users who are new to the concepts and tooling that Webmin covers.  By using Webmin, one can learn about a large number of interesting topics without having to worry about how to type in all of the commands or how to install and configure the tools that come bundled with Webmin.  This allows users to really increase their productivity.  Couple the Webmin tool with a cookbook of concrete examples and you have a great recipe for learning how to use a powerful tool correctly.

Wrapping such a broad spectrum of topics and tools into a web based tool can be complicated.  But used as reference material, this book does a great job of making everything clear, with good written explanations of how everything works together as well as pictorial examples that tie the written concepts to concrete, real world usage.  Now is also a good time to mention that the book follows a nice pattern for organizing topics.  From the outset, it starts with the more basic administrative topics and principles, covering each one thoroughly with good descriptions and solid examples.  The book then progresses quite nicely through the different topics and eventually gets into some of the more obscure ones.

The Webmin Administrator’s Cookbook does a nice job of combining many complex system administration topics into a nice, easy to follow and read reference guide that can be utilized by all different levels of Linux and administrative experience.  If you use Webmin in any capacity at all, this book would be a great reference and guide to help you be more productive in your day to day with this tool.

You can find more information about the book here.  While you are at it, check out the author, Michal Karzynski’s blog for more interesting and useful tips – http://michal.karzynski.pl.


Leveraging Nagios Plugins with Chef and Sensu

Setting up Nagios plugins to run in a Sensu and Chef managed environment is straightforward.  For example, I recently have been interested in monitoring SSL certificate expiration dates, and it just so happens that the Nagios check_http plugin does exactly what I’m looking for.

The integration between Sensu and the Nagios plugins is very nice.  For convenience in our Sensu environment, we like to put the additional Nagios plugins on all of the systems we monitor because the footprint is negligible and it gives us some nice flexibility should an additional service we hadn’t anticipated get added to a server in the future.  For the small amount of effort it takes to get the checks onto a server and working, adding the Nagios plugins is totally worth it.

The first step is to add the Nagios plugins to your Chef recipe.  I am using a generic Chef recipe for my Sensu clients that takes care of some of the more tedious tasks, including downloading the appropriate scripts and checks for the clients to run, as well as some other dependencies and items that Sensu likes to have.  Luckily there is a public Debian package available for the Nagios plugins, so it is easy to add them.  Just add this snippet to your Chef recipe for Sensu clients:

apt_package "nagios-plugins" do
  action :install
end

After your next chef-client run you will have access to a variety of checks provided by the nagios-plugins package, as illustrated below.

[Screenshot: Nagios checks available on the client]
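
If you would rather check from the command line, listing the plugin directory (the same path used in the check command below) shows everything that was installed:

ls /usr/lib/nagios/plugins/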

There are a number of examples available, but to run the check_http certificate expiration check by hand you can use this command:

/usr/lib/nagios/plugins/check_http -H <sitename> -C 30,10

Where <sitename> is the URL of the website you would like to check.  The -C 30,10 arguments tell check_http to warn when the certificate expires within 30 days and go critical within 10 days.  Now that we are able to run this check manually, go ahead and roll it into your Chef recipe for Sensu.  An example of this might look similar to the following:

sensu_check "check_web" do 
  command "/usr/lib/nagios/plugins/check_http -H localhost -C 30,10" 
  handlers ["pagerduty", "slack"] 
  subscribers ["core"] 
  interval 60
  standalone true
  additional(:notification => "Certificate will expire soon", :occurrences => 5) 
end

You may not want to run this check on every host, so it may be a good idea to run it as a standalone check.  It is simple enough to add this snippet to any recipe and tack the standalone true attribute onto the sensu_check resource; the example above shows what this attribute looks like.

Adding the Nagios plugins gives you a very nice set of additional tools for your monitoring arsenal for not much effort.  You never know when one of them might come in handy, so I suggest you try them out and take a look at what else is included.


Introduction to Grafana

Grafana is a beautiful dashboard for displaying various Graphite metrics through a web browser.  It is simple to set up and maintain, easy to use, and displays metrics in a very nice Kibana-like style.  I would like to walk readers through the basics of this tool because, although it is a very new project (beginning of 2014), it has an enormous amount of potential and I honestly believe there is enough functionality to put it into production.

The guys over at Hosted Graphite have just recently released hosted Grafana as part of their standard plan.  If you are interested you can check it out over at Hosted Graphite.

One very nice feature of this product offering is that you do not need to worry about any of the behind the scenes details or intricacies of how all of the Graphite components work together.  You just click a few buttons and you are ready to go.  Highly recommended if you just need something that works; go check it out.

Anyway, Grafana gives you the ability to bolt all kinds of bells and whistles onto your graphs.  For example, there is a nice little node.js tool called statsd, written by the guys over at Etsy, that expands the capabilities of Graphite.  And since it hooks right in with Graphite, it makes it very simple to get those metrics represented in Grafana.
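
To give an idea of how simple the statsd side is: metrics are plain text lines of the form name:value|type sent over UDP, port 8125 by default.  The metric name here is a made-up example, and nc is used just for illustration:

echo "deploys.production:1|c" | nc -u -w1 127.0.0.1 8125   # fire a one-off counter at a local statsd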

If you do choose to roll your own Grafana solution, there are a few gotchas that I was not aware of that I’d like to cover.  The first, which should seem easy enough, is that you need to run both Graphite and Grafana simultaneously.  That means you either need to create two virtual hosts, depending on which web server you choose, or you need to create two different port bindings.  The Grafana documentation has a few examples of how to create a new port binding using Apache as the web server, but it is pretty easy to find example configs on Github and other sites if you are interested in using nginx instead.  The important thing to remember is that you will want two sites listed in your sites-enabled directory inside of the Apache directory: one for Graphite (which I moved to port 8080 for simplicity) and one for Grafana (which I stuck on port 80).
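
For the port binding piece, here is a minimal sketch of what that looks like on a Debian/Ubuntu style Apache install, where the Listen directives live in /etc/apache2/ports.conf (the ports simply mirror the setup described above):

Listen 80
Listen 8080

Each of the two virtual hosts in sites-enabled then binds to one of these ports, as the configs below show.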

If you choose to use authentication there are a few extra headers that need to be set as well so that Grafana is accessible correctly.  This is easy to add to either your nginx or your Apache config.

Here is the adapted Apache configuration necessary to add authentication, where WEBSITENAME is the globally reachable DNS name of your webserver:

Header set Access-Control-Allow-Origin "http://WEBSITENAME"
Header set Access-Control-Allow-Methods "GET, OPTIONS"
Header set Access-Control-Allow-Headers "origin, authorization, accept"
Header set Access-Control-Allow-Credentials true

<Location />
  AuthName "graphs restricted"
  AuthType Basic
  AuthUserFile /etc/apache2/htpasswd
  <LimitExcept OPTIONS>
    require valid-user
  </LimitExcept>
</Location>

That is almost the entire configuration that I have for the Grafana webserver.  You will also need a way to generate the password hashes that go in the AuthUserFile referenced above.
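
On Debian/Ubuntu the htpasswd utility from the apache2-utils package can generate those hashes; the package name, file path and username below are assumptions to adjust for your own setup:

sudo apt-get install apache2-utils
sudo htpasswd -c /etc/apache2/htpasswd admin

The Graphite configuration is only a little bit more complex and was created by Chef, so I will go ahead and post that here as well: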

<VirtualHost *:8080>
  ServerName SERVER
  ServerAdmin [email protected]
  DocumentRoot "/opt/graphite/webapp"
  ErrorLog /opt/graphite/storage/log/webapp/error.log
  CustomLog /opt/graphite/storage/log/webapp/access.log common

  Header set Access-Control-Allow-Origin "*"
  Header set Access-Control-Allow-Methods "GET, OPTIONS"
  Header set Access-Control-Allow-Headers "origin, authorization, accept"

  WSGIScriptAlias / /opt/graphite/conf/wsgi.py

  <Directory /opt/graphite>
  <Files wsgi.py>
    Require all granted
  </Files>
  </Directory>

  <Location "/content/">
    SetHandler None
  </Location>

  <Location "/media/">
    SetHandler None
  </Location>

 # NOTE: In order for the django admin site media to work you
 # must change @DJANGO_ROOT@ to be the path to your django
 # installation, which is probably something like:
 # /usr/lib/python2.6/site-packages/django
 Alias /media/ "@DJANGO_ROOT@/contrib/admin/media/"

</VirtualHost>

Again, pretty straightforward.  If you choose to add authentication, it is done in a similar way to the Grafana configuration above.

The only other step is to add your metrics into Graphite.  I am using Sensu, which I have written a bit about and plan to write more about in the future.  Reference some of those writings to get an idea of metric gathering, and if you have any questions let me know.  I will be writing a post in the near future about gathering and aggregating metrics with Sensu.  For now I will just assume that you already have a way to collect metrics.

Once you have everything set up you should be able to start playing around with the Grafana GUI.  After you have your configurations ironed out, pretty much everything else is done through the GUI, which is nice.

To illustrate the power of Grafana, here are a few example dashboards I have built recently:

[Grafana dashboard screenshots: Memory Used, CPU usage, Disk space used]

If you want to check out the Grafana project you can find more information on either their Github page or their website.  The docs page on the main website is a great resource, as is the IRC channel.  In fact, the IRC channel is probably the best place to go for help because it isn’t overcrowded and the creator of Grafana is in there quite a bit.
