Chef data bags with Test Kitchen

As a step towards integrating your Chef cookbooks with Jenkins CI and your testing/release pipeline, it is important to make sure that local changes pass unit and integration tests before they are accepted and committed into version control.  For example, when running test kitchen, many tests will only pass correctly if you fully simulate what data bags and encrypted data bags do on a local box.  So today I would like to focus on a stumbling block on the way to Jenkins and integration testing that I ran into recently.  I learned a few lessons along the way that I would like to share, because there wasn’t much good information out there on how to do this.

First, I need to give credit where it is due.  This post was a great resource in my journey to find a solution to my test kitchen data bag issue.

The largest roadblock I found along the way was that the version of test kitchen I was using shipped with chef-solo as the primary provisioner.  There has been a lot of discussion around this topic lately, and (from what I understand) it has pretty much become the general consensus within the Chef community that chef-solo should be replaced by chef-zero.  There are a number of advantages to using chef-zero instead of chef-solo, including a lesson I learned the hard way: chef-zero can act as a standalone Chef server, which unlocks the ability to store data bags and encrypted data bags without any sort of wacky hacking to get Chef to compile and converge correctly.

There was a good post written recently that expounds on the benefits of chef-zero over chef-solo.  It is here, and is definitely worth the read if you are interested in learning more.

So with that knowledge in mind, here is what a newly updated sample .kitchen.yml file might look like:

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-13.10-i386
  - name: centos-6.4-i386

suites:
  - name: default
    data_bags_path: "test/integration/data_bags"
    run_list:
      - recipe[recipe-to-test]
    attributes:

It’s a pretty straightforward config.  The biggest change you will notice is that the provisioner is now chef_zero instead of chef_solo – I now know that makes all the difference in the world.  The next change to observe is the data_bags_path in the suites section.  This bit of configuration tells the provisioner to look at the specified path when chef-zero spins up and to use it to serve the data bags, encrypted data bags, and anything else that would normally live on the Chef server that clients talk to.

So in the test/integration/data_bags directory I have a directory per data bag, with a JSON file inside for each item; the one I am interested in here is sensu/ssl.json.  This file contains essentially the same information that is stored on the Chef server about the SSL certificates used for live hosts in the production environment, just mirrored into a sandbox/integration testing environment.
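If it helps to see it spelled out, the layout chef-zero expects is one directory per data bag and one JSON file per item, so creating it looks something like this (the names here just match my sensu/ssl example):

mkdir -p test/integration/data_bags/sensu
$EDITOR test/integration/data_bags/sensu/ssl.json   # data bag "sensu", item "ssl"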

If you’re interested, here is a sample of what the ssl.json file might look like:

{
  "id": "ssl",
  "server": {
    "key": "-----BEGIN RSA PRIVATE KEY-----...",
    "cert": "-----BEGIN CERTIFICATE-----...",
    "cacert": "-----BEGIN CERTIFICATE-----..."
  },
  "client": {
    "key": "-----BEGIN RSA PRIVATE KEY-----...",
    "cert": "-----BEGIN CERTIFICATE-----..."
  }
}

Note that the “id” is “ssl”.  As far as I know, the file name (minus the .json extension) must match the id when you are creating this file.

Now you should be able to create and converge your test recipe with test kitchen:

kitchen create ubuntu
kitchen converge ubuntu
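If the converge succeeds, you can also check instance state and run your integration tests with test kitchen’s built-in commands:

kitchen list            # shows each instance and whether it is created/converged/verified
kitchen verify ubuntu   # runs the integration tests against the ubuntu instance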

If you have any difficulty, let me know.  I tried to be thorough in this write-up but could have accidentally skipped something important.  The main takeaways are 1) use chef-zero wherever possible and 2) make sure your data bag paths and files are created correctly and referenced correctly in your .kitchen.yml file.  Finally, if you are still having issues, triple check the spelling and JSON syntax of your paths and configs.
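One quick way to catch JSON syntax slips, assuming Python is available on your workstation:

python -m json.tool test/integration/data_bags/sensu/ssl.json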

Leveraging Nagios Plugins with Chef and Sensu

Setting up Nagios plugins to run in a Sensu and Chef managed environment is straightforward.  For example, I have recently been interested in monitoring SSL certificate expiration dates, and it just so happens that the Nagios check_http plugin does exactly what I’m looking for.

The integration between Sensu and the Nagios plugins is very nice.  For convenience in our Sensu environment, we like to put the additional Nagios plugins on all of the systems we monitor: the footprint is negligible, and it gives us the flexibility to monitor new services should one get added to a server in the future that we hadn’t anticipated.  For the small amount of effort it takes to get the checks onto a server and working, adding the Nagios plugins is totally worth it.

The first step is to add the Nagios plugins to your Chef recipe.  I am using a generic Chef recipe for my Sensu clients that takes care of the more tedious tasks, including downloading the appropriate scripts and checks for the clients to run, as well as some other dependencies that Sensu likes to have.  Luckily there is a public Debian package for the Nagios plugins, so it is easy to add them.  Just add this snippet to your Chef recipe for Sensu clients:

apt_package "nagios-plugins" do
  action :install
end

After your next chef-client run you will have access to a variety of checks provided by the Nagios plugins package, as illustrated below.

[Screenshot: checks available in /usr/lib/nagios/plugins]
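To reproduce that listing yourself, just look in the plugins directory:

ls /usr/lib/nagios/plugins/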

There are a number of checks to choose from, but to run check_http for certificate expiration by hand you can use this command:

/usr/lib/nagios/plugins/check_http -H <sitename> -C 30,10

Where <sitename> is the URL of the website you would like to check, and -C 30,10 tells check_http to warn when the certificate expires within 30 days and go critical within 10.  Now that we are able to run this check manually, go ahead and roll it into your Chef recipe for Sensu.  An example might look similar to the following:

sensu_check "check_web" do 
  command "/usr/lib/nagios/plugins/check_http -H localhost -C 30,10" 
  handlers ["pagerduty", "slack"] 
  subscribers ["core"] 
  interval 60
  standalone true
  additional(:notification => "Certificate will expire soon", :occurrences => 5) 
end
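As a sanity check after the next chef-client run, you can inspect the JSON definition the cookbook renders for this check.  Assuming the sensu-chef cookbook’s default layout (the exact path may vary by cookbook version), it lands under /etc/sensu/conf.d:

cat /etc/sensu/conf.d/checks/check_web.json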

You may not want to run this check on every host, so it can be a good idea to run it as a standalone check.  It is simple enough to add this snippet to any recipe and tack the “standalone true” attribute onto the sensu_check resource; the example above shows what this attribute looks like for reference.

Adding the Nagios plugins gives you a very nice set of additional tools for your monitoring arsenal for not much effort.  You never know when one of them will come in handy, so I suggest taking a look and trying them out.

Set up PEM key authentication

Many times it is useful to use keys to authenticate to your servers.  This can dramatically improve security, and it is a great way to manage servers in bulk as well: you just need to keep track of your keys rather than remembering a large number of passwords.  The steps to set up PEM key authentication are fairly straightforward, but it never hurts to walk through the process of getting them set up correctly.

Side Note: I’d like to briefly mention that I have these steps set up to work with Chef, so every server that gets deployed using Chef will use PEM keys out of the box, which works out very nicely.  If you’re interested, I can expound on that topic a little more; just let me know.

The first step in the process is to generate some keys using openssl.  If you don’t have openssl, go download and install it.  If you do have openssl but haven’t updated in a while, please update to avoid the Heartbleed vulnerability that was recently exploited (nearly all distributions have released the patched version at this point, so it should be trivial).

We want to generate our key and create a PEM file out of it.  Here are the steps:

cd ~/.ssh
ssh-keygen -t dsa -b 1024
openssl dsa -in id_dsa -outform pem > test.pem
cat id_dsa.pub >> authorized_keys

You can leave the values blank (accept the defaults) when ssh-keygen prompts you.

Now you should see listings similar to these in your ~/.ssh directory:

[Screenshot: ~/.ssh directory listing]

  • authorized_keys – The list of public keys that the PEM file gets authenticated against
  • id_dsa – The private half of the key pair we generated in the steps above
  • id_dsa.pub – The public half of the key pair, which is what gets appended to authorized_keys
  • test.pem – The file that will be used to authenticate; essentially the private key minus the passphrase

Now you just need to copy the test.pem file that was just generated, using scp or rsync, to the host you will be connecting from.  Once that is done, the command to connect to the remote host using your key should look similar to the following:

ssh -i /path/to/pem user@server-name
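One gotcha worth mentioning: ssh will refuse to use a private key file with loose permissions, so lock the PEM file down after copying it:

chmod 600 /path/to/pem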

Next steps: at this point you should have working PEM key authentication on your server, so it is probably a good idea to start looking at hardening the security as well as the SSH configuration of the host.  Small things can go a long way.  For example, disabling root login and disabling password authentication will stop a very large number of attacks from hitting your server now that you are authenticating with PEM keys.
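For example, those two settings are one-liners in /etc/ssh/sshd_config; restart the SSH service afterwards for them to take effect:

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no

# then apply the change
sudo service ssh restart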

Setting up a private git repo in Chef

As a Chef newbie I really might have bitten off more than I could chew when I first looked at how to get private GitHub repos working, but I am glad I pushed through and got a solution working.  I have no idea if this is the preferred method or if there are easier ways, but it worked for me, so I am hoping that if any other Chef newbies stumble across this problem they can use this post as a guide or reference.

First, I’d like to give credit where it is due.  I used this post as a template as well as the SSH wrapper section in the deploy documentation on the Chef website.

The first issue I had problems with is that when you connect to GitHub via SSH, it wants the Chef client to accept its public fingerprint.  By default, if you don’t modify anything, SSH will just sit there waiting for the fingerprint to be accepted.  That is why the SSH Git wrapper is used: it tells SSH on the Chef client not to worry about verifying the GitHub server’s host key and to just accept it.  Here’s what my SSH Git wrapper looks like:

#!/usr/bin/env bash
/usr/bin/env ssh -o "StrictHostKeyChecking=no" -i "/home/vagrant/.ssh/id_rsa" $1 $2

You just need to have your Chef recipe place this wrapper script on the node:

# Set up github to use SSH authentication 
cookbook_file "/home/vagrant/.ssh/wrap-ssh4git.sh" do 
  source "wrap-ssh4git.sh" 
  owner "vagrant" 
  mode 00700 
end
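Under the hood this is the same mechanism as git’s GIT_SSH environment variable, so you can test the wrapper by hand before baking it into a recipe (the repo URL below is just a placeholder):

GIT_SSH=/home/vagrant/.ssh/wrap-ssh4git.sh git ls-remote git@github.com:someuser/private-repo.git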

The next problem is that when using key authentication you must supply both a public and a private key.  This isn’t an issue if you are setting up the server and configs by hand, because you can just generate a key on the fly and hand the public half to GitHub to tell it who you are.  When you are spinning instances up and down you don’t have this luxury (actually you might, but it seemed like a pain in the ass).

To get around this, we create a couple of templates in our cookbook to let the Chef client connect to GitHub with an already established public and private key: the id_rsa and id_rsa.pub files shown below.  Here’s what the configs look like in Chef:

# Public key 
template "/home/vagrant/.ssh/id_rsa.pub" do 
  source "id_rsa.pub" 
  owner "vagrant" 
  mode 0600 
end 
 
# Private key 
template "/home/vagrant/.ssh/id_rsa" do 
  source "id_rsa" 
  owner "vagrant" 
  mode 0600 
end

After that is taken care of, the only other minor caveat is that cloning a huge repo might time out unless you override the default timeout value, which is set to 600 seconds (10 minutes).  I had some trouble finding this information in the docs, but thanks to Seth Vargo I was able to find what I was looking for.  It is easy enough to accomplish; just use the following snippet to override the default value:

timeout 9999

That should be it.  There are probably other, easier ways to accomplish this and so I definitely think the adage “there’s more than one way to skin a cat” applies here.  If you happen to know another way I’d love to hear it.

A Git Primer for Sysadmins

I am writing this post to give readers (and other sysadmins) who are unfamiliar with Git a brief introduction, to get up and running as quickly as possible.  There are numerous guides out there, but none I have found with the spin of getting things working quickly for system administrators.  I recently went through the process of submitting some of my work to an open source project, and I thought the process could be streamlined if I presented a simple set of steps and examples so that other sysadmins can get started and get comfortable using Git as quickly as possible.  By no means am I claiming to be a Git expert; I simply know enough to “do stuff,” so take this guide for what it is, a simple introduction.  Hopefully it is enough to get the ball rolling and get you hooked on revision control!

Step 1:  If you haven’t already, head over to GitHub and set yourself up with a free account.  The steps are trivial, just like setting up any other account.  You will also need to install Git on your machine locally: sudo apt-get install git in Ubuntu, or with a little more legwork if you are using Windows.

Step 2:  Once you are all set up, you can either create a new repository or fork somebody else’s repo, which essentially makes a copy of their work under your user account on GitHub.  Since I was contributing to a public project, I forked their repo.  This is typically (for beginners at least) done through the website by browsing to the project you are interested in and clicking the Fork button at the top right.

[Screenshot: the Fork button on a GitHub project page]

Once you have created or forked the project you want, you can begin making your own changes and updates.  This is where much of the initial confusion came in for me, so I should make a note here: a few things must be done through the website, while others should be done with the git tool you downloaded and installed earlier.

Step 3:  Clone the repo down to your machine locally to begin making changes.  This is done via command line with the following:

git clone https://github.com/$username/repository.git
git clone https://github.com/jmreicha/curriculum.git

Once you have the repo cloned to your machine, let’s create a branch to work on the specific piece of the project we are interested in.  Before that, though, there are a few more common commands worth knowing for working with repos.  In some scenarios the public project may change or get updated while you are working on it, so you might want to pull the most recent changes into your own fork.  The following commands are used to work with public repos and forks.

Fetch new changes from the original repo:

git fetch upstream
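Note that git fetch upstream assumes you have already registered the original project as a remote named upstream; if you haven’t, add it once per clone (using whatever project you forked from):

git remote add upstream https://github.com/$original_owner/repository.git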

Merge the fetched changes into your working files:

git merge upstream/master

Pull the newest updates and changes from a fork or repo:

git pull

Okay, let’s assume that our repository is all up to date and we are ready to start working on the new changes that we want to make.  First, we need to create a local branch to make the changes on.  The command for creating a branch is:

git checkout -b $topic
git checkout -b my_topic

Where $topic is the specific change you will be working on.  Then go ahead and make your changes and/or updates with your favorite text editor.  If you have multiple changes to make it may be a good idea to create a few different branches using the command above.  To switch between branches use the following:

git checkout $branchname
git checkout my_topic

If you are working on a branch and either don’t want to commit the changes or the branch has become obsolete, you can throw away uncommitted work with git reset --hard and then delete the branch (note that git branch -d refuses to delete a branch with unmerged commits; use -D to force it):

git reset --hard
git branch -d $branchname
git branch -d my_topic

Step 4:  Assuming you are all done with your branches and finished making your changes, the next step is to commit them.  A commit basically records a snapshot of the state of the files.  Typically you will want to do one thing at a time so commits are easier to keep track of: make one change, commit it, and repeat if there are numerous changes to be made.  The command is:

git commit -am "update goes in here"
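One detail to keep in mind: the -a flag only stages files Git already tracks, so brand-new files need an explicit git add first (the file name below is just an example):

git add new-script.sh
git commit -am "add new script"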

Now that we have created a branch, made our changes, and committed them so that Git knows about them and can keep track of them, we are ready to merge the changed code back up to GitHub.  Start by pushing the branch up to the GitHub repo.

git push $remote $branchname
git push origin my_topic

Next, merge your changes.  This is done by switching to the “master” branch and then issuing a merge command naming the branch the changes were made on.  Here is what this process looks like:

git checkout master
git merge $branchname
git merge my_topic
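After the merge, the result still exists only in your local clone, so push master up to publish it:

git push origin master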

Alternatively, if you just want to quickly push a change out to the master branch, you can issue a git commit locally and then a git push to update the master branch without merging branches first.

git commit -am "updated message"
git push

I like to use this for local repositories where I just need to get things up quickly.

Step 5:  At this point everything should be accurate on your own personal GitHub page.  You can double check your commit history or look at the last update time to make sure your changes made their way into your public fork or repo.  If you are contributing to a public project, the final step is to issue a pull request on the GitHub site.  Once this is done, wait to hear back from a maintainer of the project: either they will ask you to fix your changes before accepting them, or they will respond that your changes were good and merge them into the project.

Git, to me, was a little bit confusing at first.  Like I said earlier, I still only have a basic understanding, but I think it is important for system administrators to know how to commit code to projects and how to keep track of their own code and projects as well.  Even if the code is only small scripting projects, it is still beneficial to publish it, since GitHub is a public site: potential colleagues and employers can browse through your work, which increases your visibility and can even lead to job offers.

If you’re interested in learning more, I found this site to be a great resource.