Backing up Jenkins configurations to S3

If you have worked with Jenkins for any extended length of time you quickly realize that the server configuration can become complicated.  If the server ever breaks and you don’t have a good backup of all the configuration files, it can be extremely painful to recreate all of the jobs you have configured.  And if you have started using the Jenkins workflow libraries, all of your custom scripts and pipeline code will disappear as well if you don’t back them up.

Luckily, backing up your Jenkins job configurations is a fairly simple and straightforward process.  Today I will cover one quick and dirty way to back up configs using a Jenkins job.

There are some AWS plugins that will back up your Jenkins configurations, but I found that it was just as easy to write a little bit of bash to do the backup, especially since I wanted to back up to S3, which none of the plugins I looked at handle.  In general, the plugins I looked at either felt a little too heavy for what I was trying to accomplish or didn’t offer the functionality I was looking for.

If you are still interested in using a plugin, here are a few to check out:

Keep reading if none of the above plugins look like a good fit.

The first step is to install the needed dependencies on your Jenkins server.  For the backup method that I will be covering, the only tools that need to be installed are the aws cli, tar and rsync.  Tar and rsync should already be installed; to get the aws cli, download and install it with pip on the Jenkins server that holds the configurations you want to back up.

pip install awscli
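
If you want to double check that the install worked, the cli has a version flag:

aws --version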

After the prerequisites have been installed, you will need to create your Jenkins job.  Click New Item -> Freestyle and input a name for the new job.

jenkins job name

Then you will need to configure the job.

The first step is figuring out how often you want to run this backup.  A simple strategy would be to back up once a day.  The once-per-day strategy is illustrated below.

backup periodically

Note that the ‘H’ above tells Jenkins to randomize exactly when the job runs within the given period, so that the load gets spread out if other jobs are configured with a similar schedule.
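
For reference, a once-per-day schedule using the ‘H’ syntax would look something like this in the “Build periodically” schedule field (Jenkins derives the actual run time from a hash of the job name):

# run once a day, at a time Jenkins picks based on the job name
H H * * *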

The next step is to back up the Jenkins files.  The logic is all written in bash, so if you are familiar with bash it should be easy to follow along.

# Delete all files in the workspace
rm -rf *

# Create a directory for the job definitions
mkdir -p $BUILD_ID/jobs

# Copy global configuration files into the workspace
cp $JENKINS_HOME/*.xml $BUILD_ID/

# Copy keys and secrets into the workspace
cp $JENKINS_HOME/identity.key.enc $BUILD_ID/
cp $JENKINS_HOME/secret.key $BUILD_ID/
cp $JENKINS_HOME/secret.key.not-so-secret $BUILD_ID/
cp -r $JENKINS_HOME/secrets $BUILD_ID/

# Copy user configuration files into the workspace
cp -r $JENKINS_HOME/users $BUILD_ID/

# Copy custom Pipeline workflow libraries
cp -r $JENKINS_HOME/workflow-libs $BUILD_ID

# Copy job definitions into the workspace
rsync -am --include='config.xml' --include='*/' --prune-empty-dirs --exclude='*' $JENKINS_HOME/jobs/ $BUILD_ID/jobs/

# Create an archive from all copied files (since the S3 plugin cannot copy folders recursively)
tar czf jenkins-configuration.tar.gz $BUILD_ID/

# Remove the directory so only the tar.gz gets copied to S3
rm -rf $BUILD_ID

Note that I am not backing up the job history because the history isn’t important for my uses.  If the history IS important, make sure to add a line to back up those locations as well.  Likewise, feel free to modify and/or update anything else in the script if it suits your needs better.
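
For example, one simple (if potentially heavy, since build history can grow large) approach would be to replace the rsync line above with a full copy of the jobs directory:

# Copy full job directories, including build history
cp -r $JENKINS_HOME/jobs $BUILD_ID/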

The last step is to copy the backup to another location, which is why we installed the aws cli earlier.  Here I am simply uploading the tar file to an S3 bucket that has versioning enabled (look up how to configure bucket versioning if you’re not familiar with it).

export AWS_DEFAULT_REGION="xxx"
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"

# Upload archive to S3
echo "Uploading archive to S3"
aws s3 cp jenkins-configuration.tar.gz s3://<bucket>/jenkins-backup/

# Remove tar.gz after it gets uploaded to S3
rm -rf jenkins-configuration.tar.gz

Replace AWS_DEFAULT_REGION with the region where the bucket lives (typically us-east-1), and make sure to update AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to use an account with permission to write to the S3 bucket (not covered here).  Finally, <bucket> should be replaced with your own bucket name.
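
If the bucket doesn’t have versioning turned on yet, it can be enabled with a single aws cli call.  Run this once, outside of the Jenkins job, with credentials that are allowed to change bucket settings:

aws s3api put-bucket-versioning --bucket <bucket> --versioning-configuration Status=Enabled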

The backup process itself is usually pretty fast unless the Jenkins server has a massive number of jobs and configurations.  Once you have configured the job, run it once to make sure it works.  If the job completes successfully, go check your S3 bucket and make sure the tar.gz file was uploaded.  If you are using versioning there should just be one file, and if you choose the “show versions” option you will see something similar to the following.

s3 backup

If everything went okay with your backup and upload to S3, you are done.  Common issues when configuring this backup method are choosing the correct AWS bucket, region and credentials.  Also, double check where all of your Jenkins configurations live in case they aren’t in a standard location.


Generate Certbot certificates with a container

This is a little bit of a follow up to the original post about generating certs with the DNS challenge.  I decided to create a little container that can be used to generate a certificate based on the newly renamed dehydrated script, with the extras included to make DNS provisioning easy.

A few things have changed in the evolution of Let’s Encrypt and its tooling since the last post was written.  First, some of the tools have been renamed, so I’ll try to clear up the names in case there is any confusion.  The official Let’s Encrypt client has been renamed to Certbot.  The shell script used to provision the certificates has been renamed as well: what used to be called letsencrypt.sh is now dehydrated.

The Docker image can be found here.  The image is essentially the dehydrated script plus a few other dependencies needed to make the DNS challenge work, including Ruby, a Ruby DNS hook script and a few gems that the hook relies on.

The following is an example of how to run the script:

docker run -it --rm \
    -v $(pwd):/dehydrated \
    -e AWS_ACCESS_KEY_ID="XXX" \
    -e AWS_SECRET_ACCESS_KEY="XXX" \
    jmreicha/dehydrated-dns --cron --domain test.example.com --hook ./route53.rb --challenge dns-01

Just replace test.example.com with the desired domain.  Make sure that the DNS zone has been added to Route53 and that the AWS credentials used have the appropriate permissions to read and write records in that Route53 zone.
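
If you want to sanity check that the zone exists and that your credentials can see it, and you happen to have the aws cli installed locally (not required by the container itself), something like the following should list the hosted zone:

aws route53 list-hosted-zones-by-name --dns-name example.com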

The command is essentially the same as the one in the original post, but it is a lot more convenient to run now because you can specify where on your local system you want the generated certificates dumped and you can easily specify/update the AWS credentials.

I’d like to quickly explain the decision to containerize this process.  The dehydrated tool has obviously been designed and written as a standalone tool, but generating certificates using the DNS challenge requires a few extra tidbits to be added.  Cooking all of the requirements into a container makes the setup portable, so it can be easily automated in different environments, and flexible, so it can be run in a variety of setups with different domain names and AWS credentials.  With the container approach, the certs could even be dropped out onto a Windows machine running Docker for Windows, for example.

tl;dr This setup may be overkill for some, but it has worked out well for my purposes.  Feel free to give it a try if you want to test out creating Certbot certs with the dehydrated tool in a container.


Running containers on Windows

There has been a lot of work lately to bring Docker containers to the Windows platform.  Docker has been working closely with Microsoft and just announced the availability of Docker on Windows at the latest Ignite conference.  So, in this post we will go from 0 to your first Windows container.

This post covers some details about how to get up and running via the Docker app as well as manually with some basic Powershell commands.  If you just want things to work as quickly as possible I would suggest the Docker app method; otherwise, if you are interested in learning what is happening behind the scenes, try the Powershell method.

The prerequisites are basically the Windows 10 Anniversary update and its required components: the Docker app if you want to configure things through its GUI, or the Windows container feature plus Hyper-V if you want to configure your environment manually.

Configure via Docker app

This is by far the easier of the two methods.  This recent blog post has very good instructions and installation steps which I will step through in this post, adding a few pieces of info that helped me out when going through the installation and configuration process.

After you install the Win 10 Anniversary update, go grab the latest beta version of the Docker Engine via the Docker for Windows project.  NOTE: THIS METHOD WILL NOT WORK IF YOU DON’T USE BETA 26 OR LATER.  To check your version, click on the Docker tray icon, choose “About Docker” and make sure it says -beta26 or higher.

about docker

After you go through the installation process, you should be able to run Docker containers.  You should also now have access to the other Docker tools, including docker-compose and docker-machine.  To test that things are working, run the following command.

docker run hello-world

If the run command worked you are most of the way there.  By default, the Docker engine will be configured to use the Linux based VM to drive its containers.  If you run “docker version” you can see that your Docker server (daemon) is using Linux.

docker version

In order to get things working via Windows, select the option “Switch to Windows containers” in the Docker tray icon.

switch to windows containers

Now run “docker version” again and check what Server architecture is being used.

docker version

As you can see, your system should now be configured to use Windows containers.  Now you can try pulling a Windows based container.

docker pull microsoft/nanoserver

If the pull worked, you are all set.  There’s a lot going on behind the scenes that the Docker app abstracts, but if you want to try enabling Windows support yourself manually, see the instructions below.

Configure with Powershell

If you want to try out Windows native containers without the latest Docker beta check out this guide.  The basic steps are to:

  • Enable the Windows container feature
  • Enable the Hyper-V feature
  • Install Docker client and server

To enable the Windows container feature from the CLI, run the following command from an elevated (admin) Powershell prompt.

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

To enable the Hyper-V feature from the CLI, run the following command from the same elevated prompt.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

After you enable Hyper-V you will need to reboot your machine. From the command line the command is “Restart-Computer -Force”.

After the reboot, you will need to either install the Docker engine manually or just use the Docker app.  Since I have already demonstrated the Docker app method above, here we will install the engine manually.  It’s also worth mentioning that if you are using the Docker app method, or have used it previously, the commands above have already been run for you, so the features should already be turned on, which simplifies the process.

The following will download the engine.

Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile "$env:TEMP\docker-1.13.0-dev.zip" -UseBasicParsing

Expand the zip into the Program Files path.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles

Add the Docker engine to the path.

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

Set up Docker to be run as a service.

dockerd --register-service

Finally, start the service.

Start-Service Docker

Then you can try pulling your docker image, as above.

docker pull microsoft/nanoserver
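
Assuming the pull succeeded, a quick way to verify that Windows containers run end to end is to start an interactive shell in the image:

docker run -it microsoft/nanoserver cmd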

There are some drawbacks to this method, especially in a dev environment.

The Powershell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly.  Obviously the install/config process could be scripted out, but that solution isn’t ideal for most users.  Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically.  The Docker app, by contrast, also installs and manages versions of the other Docker productivity tools, like compose and machine, that make interacting with and managing containers a lot easier.

I can see the Powershell installation method being leveraged in a configuration management scenario or where a specific version of Docker should be deployed on a server.  Servers typically don’t need the other tools and should be pinned at specific version numbers to avoid instability issues and unwanted extra software.

The Docker app is still in beta and its Windows container management component is brand new, but I would still definitely recommend it as a solution.  I haven’t had any issues with it yet, outside of a few edge cases, and it just makes the Docker experience so much smoother, especially for devs and other folks who are new to Docker and don’t want to muck around with the system.

Check out the Docker for Windows forums if you run into any issues.


Generate a Let’s Encrypt certificate using DNS challenge

UPDATE:  The letsencrypt.sh script has been renamed to dehydrated.  Make sure you are using the updated dehydrated script if you are following this guide.

The Let’s Encrypt project has recently unveiled support for the DNS-01 challenge type for issuing certificates, and the official Let’s Encrypt project added support with the recent addition of this PR on Github, which enables challenge support on the server side of things but does not enable the challenge in the client (yet).  This is great news for those that are looking for more flexibility and additional options when creating and managing LE certificates.  For example, if you are not running a web server and rely strictly on having a DNS record to verify domain ownership, this new DNS challenge option is a great alternative.

I am still learning the ins and outs of LE but so far it has been an overwhelmingly positive experience.  I feel like tools like LE will be the future of SSL and certificate creation and management, especially as the ecosystem evolves more in the direction of automation and various industries continue to push for higher levels of security.

One of the big issues with implementing DNS support in an LE client as it currently stands is the large range of public DNS providers with no standardized API.  That lack of standardization makes it very difficult to automate issuing and renewing certificates with LE.  The letsencrypt.sh project mentioned below is nice because it has implemented support for a few of the common DNS providers (AWS, CloudFlare, etc.) as hooks, which allow the letsencrypt.sh client to connect to their APIs and create the necessary DNS records.  Additionally, if support for a DNS provider doesn’t exist it is easy to add by creating your own custom hook.

letsencrypt.sh is a nice choice because it is flexible and just works.  It is essentially an implementation of the LE client written in bash, and it is an attractive option because it is well documented, easy to download and use, and very straightforward.  To use the DNS feature you will need to create a hook, which is responsible for placing the correct challenge in your DNS records.
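
A hook is just an executable that letsencrypt.sh/dehydrated invokes with a stage name followed by a few arguments.  A rough bash skeleton for a DNS hook (illustrative only; check the dehydrated docs for the exact arguments your version passes) looks something like this:

#!/usr/bin/env bash
# Skeleton DNS hook: fill in the calls to your DNS provider's API
set -e

STAGE="$1"; shift
case "$STAGE" in
  deploy_challenge)
    DOMAIN="$1"; TOKEN_FILENAME="$2"; TOKEN_VALUE="$3"
    # Create a TXT record named _acme-challenge.$DOMAIN containing $TOKEN_VALUE,
    # then wait for the record to propagate before returning
    ;;
  clean_challenge)
    DOMAIN="$1"; TOKEN_FILENAME="$2"; TOKEN_VALUE="$3"
    # Remove the TXT record that was created above
    ;;
  *)
    # Other stages (deploy_cert, unchanged_cert, ...) can be ignored for a DNS-only hook
    ;;
esac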

Here is an example hook that is used for connecting with AWS Route53 to issue certificates for subdomains.  You can grab it and make it executable with the following commands.

curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb

You also need to make sure you have the Ruby dependencies installed on your system for the script to work.  The process of installing the gems is pretty simple, but there was an issue with the version of awesome_print at the time of writing so I had to install a specific version to get things working.  Otherwise, the installation of the other gems was straightforward.  NOTE: These gems are specific to the route53.rb script.  If you use another hook that doesn’t require these dependencies you can skip the gem installations.

sudo gem install awesome_print -v 1.6.0
sudo gem install aws-sdk
sudo gem install pry
sudo gem install domainatrix
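
The route53.rb hook uses the aws-sdk gem, which picks up credentials from the usual places.  One simple option (any of the standard AWS credential mechanisms should also work) is to export them before running the script:

export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"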

After you install the dependencies, you can run the letsencrypt.sh script.

./letsencrypt.sh

Running the script with no arguments will print the different options that are available.

The following command specifies the domain on the command line (rather than referencing a domains.txt file), the custom hook that we downloaded, and the type of challenge to use, which is the dns-01 challenge.

./letsencrypt.sh --cron --domain test.example.com --hook ./route53.rb --challenge dns-01

Make sure you have your AWS credentials configured, otherwise the certificate creation will fail.  Here’s what the output of a successful certificate creation might look like.

#
# !! WARNING !! No main config file found, using default config!
#
Processing test.example.com
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: deploy_challenge
Challenge: yabPBE9YvPXGFjslRtqXh-qK27QlWQgFlTusqcDzUMQ
{
 :change_info => {
 :id => "/change/C3K8MHKLB6IRKZ",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:54:50 UTC
 }
}
--------------------<
 + Responding to challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: clean_challenge
{
 :change_info => {
 :id => "/change/CE90OICFSN00C",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:55:15 UTC
 }
}
--------------------<
 + Challenge is valid!
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
-------------------->
 Domain: test.example.com
 Root: test.example.com
 Stage: deploy_cert
 Certs: /Users/jmreicha/test/letsencrypt.sh/certs/test.example.com/cert.pem
--------------------<
 + Done!

The entire process of creation and verification should take less than a minute, and when it’s done it will drop out a certificate for you.
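
The generated files land in a certs/<domain>/ directory next to the script, as the deploy_cert path in the output above suggests.  Listing that directory should show something along these lines:

ls certs/test.example.com/
cert.csr  cert.pem  chain.pem  fullchain.pem  privkey.pem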

Here is a dump of the commands used to get from 0 to issuing a certificate with the dns-01 challenge, assuming you already have AWS set up and configured.

git clone https://github.com/lukas2511/letsencrypt.sh.git
cd letsencrypt.sh
curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb
sudo gem install aws-sdk pry domainatrix awesome_print:1.6.0
./letsencrypt.sh --cron --domain yourdomain.example.com --hook ./route53.rb --challenge dns-01

Conclusion

There are other LE clients out there that are working on implementing DNS support, including LEGO and Let’s Encrypt (now called certbot), with more clients adding the additional support and functionality all the time.  I like the letsencrypt.sh script because it is simple, easy to use, and it just works out of the box, with only a few tweaks needed.

As mentioned above, I have a feeling that automated certificates are the future as automation is becoming increasingly more common for these traditionally manual types of administration tasks.  Getting to know how to issue certificates automatically and learning how to use the tooling to create them is a great skill to have for any DevOps or operations person moving forward.


xhyve vs vbox driver benchmarks for docker-machine

Getting a usable and productive dev environment working with Docker on OS X is not exactly trivial, although it is getting much better.  If you have spent any time working with docker-machine and Docker on OS X you’ve probably run across some type of roadblock to getting your dev environment working.

If you have used docker-machine you are probably familiar with the Virtualbox driver, the driver that ships as the default.  Obviously it works out of the box, but if you have used Virtualbox for any amount of time you have probably discovered some of its quirks.  My biggest gripe thus far with Vbox is that its shared folders technology for syncing files between the host and VM is slooooow.  In fact, I have written about my own workaround here.

I have run into some other performance issues using VBox.  This write up is a very detailed comparison of the performance between VBox and VMWare.  The tl;dr of the post is that the VMWare hypervisor has better performance.  To Oracle’s credit though, many of these performance issues have actually been addressed in the vbox 5.0.0 release.  So if you aren’t running on 5.x, definitely make the jump.  The Docker Toolbox ships the newer release so there is no reason not to upgrade.

Making the jump to VBox 5.x may, and most likely will, solve your problems, but I have been curious about what other options are out there.  Recently, as of July 2015, the xhyve hypervisor project has been available on OS X.  xhyve is a port of the bhyve project, which aims to bring high performance virtualization with a light footprint to OS X.  It is still very young but shows a lot of potential.

Even younger than the xhyve project itself is the xhyve driver for docker-machine.  It is so young that it is still not an officially supported driver yet, though it looks like it is well on its way.  Definitely keep an eye on the xhyve and docker-machine xhyve projects if you are looking for an alternative to either VBox or VMWare.  The xhyve docker-machine driver project has recently closed a ticket to be added to brew so it is much less complicated to get working.

Xhyve installation

I will be going over the bare minimum installation instructions for getting everything working.  If you are interested in more of the details on the docker-machine xhyve driver, I suggest taking a look at this awesome blog post, which goes into depth on how to install and use the driver.

Make sure you have brew installed first.  You will also need to have brew cask installed.  After you have brew installed, you should be able to get cask from the command line with the following command.

brew tap caskroom/cask

Once you have cask installed you should be able to install the remaining components.

brew update
brew install xhyve
brew cask install dockertoolbox

This might take a little bit depending on how fast your internet connection is.  After you have the toolbox installed, go grab the docker-machine xhyve driver.

brew install docker-machine-driver-xhyve

If you already have the dockertoolbox installed you might see some errors in the output.  This just means there was a version conflict somewhere.  As of docker-machine version 0.5.6_1, support has been added for the xhyve driver.

There is currently a caveat to using this driver where you need to change some permissions.  This should hopefully be fixed in the future but is at least something to be aware of.

sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve

You will also need to clean out your /etc/exports file if you have made changes.

sudo mv /etc/exports{,.backup} && touch /etc/exports

Then create the machine.

docker-machine create --driver xhyve --xhyve-experimental-nfs-share test

If you can interact with the Docker daemon you should be in business.
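
To point your Docker client at the new machine and confirm that it responds, the usual docker-machine workflow applies:

eval $(docker-machine env test)
docker info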

Benchmark results

The remainder of the post describes the benchmark and performance results of the VBox driver and the xhyve driver.  If you are only interested in getting the xhyve driver working then feel free to skim through the benchmarks, but be sure to take a look at the conclusion for the final verdict.

Below are the specs of the OS X machine that was used to run my benchmarks.

  • OS X 10.10.5
  • Virtualbox 5.0.12
  • xhyve 0.2.0
  • docker-machine-xhyve 0.2.2

Many of the ideas I used for the benchmark tests were taken from the post linked above.  It shows in great detail the methodology that was used to benchmark each of the drivers, which is useful because it gives some really good insight into the tools that were used and how the tests were performed.

Benchmarking on the boot2docker VM is tricky because it is mostly a read only file system and there is no package manager.  Therefore I relied on running the benchmarks inside containers, using a few different methodologies for my testing.  The first was borrowed from the simple-container-benchmarks project on Dockerhub.  This benchmark gives a good idea of the overall write performance and CPU performance of a container running inside the VM.  For network performance I used the iperf3 image located on Dockerhub.

Below are the results of a few random runs for both the VBox driver as well as the xhyve driver.  I have left out the specific commands here as they are included in the links to each benchmark.  Use the links to each project for specific instructions on how to run the benchmarks yourself if you are interested.  The results were interesting because I was expecting the xhyve driver to outperform the VBox driver.

Virtualbox results

container benchmark results (FS write and CPU)

Client mode...
Target: 172.17.0.2
------------------------------
Performance benchmarks
------------------------------
dockerhost: tcp://192.168.99.100:2376
host: 172.17.0.2 a8b790317264
eth0: 172.17.0.2
date: Sat Jan 23 02:24:20 UTC 2016

------------------------------
FS write performance
------------------------------
1073741824 bytes (1.1 GB) copied, 2.39743 s, 448 MB/s
1073741824 bytes (1.1 GB) copied, 2.35377 s, 456 MB/s
1073741824 bytes (1.1 GB) copied, 1.9075 s, 563 MB/s
1073741824 bytes (1.1 GB) copied, 2.37838 s, 451 MB/s
1073741824 bytes (1.1 GB) copied, 2.03373 s, 528 MB/s
1073741824 bytes (1.1 GB) copied, 1.94024 s, 553 MB/s
1073741824 bytes (1.1 GB) copied, 1.99546 s, 538 MB/s
1073741824 bytes (1.1 GB) copied, 2.00287 s, 536 MB/s
1073741824 bytes (1.1 GB) copied, 1.5292 s, 702 MB/s
1073741824 bytes (1.1 GB) copied, 1.92617 s, 557 MB/s

------------------------------
CPU performance
------------------------------
268435456 bytes (268 MB) copied, 22.6775 s, 11.8 MB/s
268435456 bytes (268 MB) copied, 22.1466 s, 12.1 MB/s
268435456 bytes (268 MB) copied, 30.7552 s, 8.7 MB/s
268435456 bytes (268 MB) copied, 22.2861 s, 12.0 MB/s
268435456 bytes (268 MB) copied, 22.5571 s, 11.9 MB/s
268435456 bytes (268 MB) copied, 21.9901 s, 12.2 MB/s
268435456 bytes (268 MB) copied, 21.8232 s, 12.3 MB/s
268435456 bytes (268 MB) copied, 31.3903 s, 8.6 MB/s
268435456 bytes (268 MB) copied, 28.1219 s, 9.5 MB/s
268435456 bytes (268 MB) copied, 31.0172 s, 8.7 MB/s

------------------------------
System info
------------------------------
             total       used       free     shared    buffers     cached
Mem:       1019960     313288     706672     113104       7808     132532
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Stepping:              9
CPU MHz:               2294.770
BogoMIPS:              4589.54
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K

iperf results

Connecting to host 172.17.0.3, port 5201
[  4] local 172.17.0.4 port 39476 connected to 172.17.0.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.09 GBytes  17.9 Gbits/sec  2321   1.03 MBytes
[  4]   1.00-2.00   sec  2.46 GBytes  21.1 Gbits/sec  496    980 KBytes
[  4]   2.00-3.00   sec  2.24 GBytes  19.3 Gbits/sec  339   1.77 MBytes
[  4]   3.00-4.00   sec  2.54 GBytes  21.8 Gbits/sec  1355    389 KBytes
[  4]   4.00-5.00   sec  2.10 GBytes  18.0 Gbits/sec  106    495 KBytes
[  4]   5.00-6.00   sec  3.00 GBytes  25.7 Gbits/sec  217    411 KBytes
[  4]   6.00-7.00   sec  2.60 GBytes  22.4 Gbits/sec  440   1.72 MBytes
[  4]   7.00-8.00   sec  2.06 GBytes  17.7 Gbits/sec    0   1.72 MBytes
[  4]   8.00-9.00   sec  2.07 GBytes  17.8 Gbits/sec    0   1.72 MBytes
[  4]   9.00-10.00  sec  2.51 GBytes  21.6 Gbits/sec  876    713 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  23.7 GBytes  20.3 Gbits/sec  6150             sender
[  4]   0.00-10.00  sec  23.7 GBytes  20.3 Gbits/sec                  receiver

iperf Done.

xhyve results

container benchmark results (FS write and CPU)

Client mode...
Target: 172.17.0.2
------------------------------
Performance benchmarks
------------------------------
dockerhost:
host: 172.17.0.2 2c8d9ba61eae
eth0: 172.17.0.2
date: Sat Jan 23 02:08:15 UTC 2016

------------------------------
FS write performance
------------------------------
1073741824 bytes (1.1 GB) copied, 8.24671 s, 130 MB/s
1073741824 bytes (1.1 GB) copied, 5.89179 s, 182 MB/s
1073741824 bytes (1.1 GB) copied, 6.05392 s, 177 MB/s
1073741824 bytes (1.1 GB) copied, 5.37728 s, 200 MB/s
1073741824 bytes (1.1 GB) copied, 4.824 s, 223 MB/s
1073741824 bytes (1.1 GB) copied, 5.90409 s, 182 MB/s
1073741824 bytes (1.1 GB) copied, 5.22375 s, 206 MB/s
1073741824 bytes (1.1 GB) copied, 5.07298 s, 212 MB/s
1073741824 bytes (1.1 GB) copied, 5.89058 s, 182 MB/s
1073741824 bytes (1.1 GB) copied, 4.80828 s, 223 MB/s

------------------------------
CPU performance
------------------------------
268435456 bytes (268 MB) copied, 25.478 s, 10.5 MB/s
268435456 bytes (268 MB) copied, 31.3984 s, 8.5 MB/s
268435456 bytes (268 MB) copied, 24.698 s, 10.9 MB/s
268435456 bytes (268 MB) copied, 31.1973 s, 8.6 MB/s
268435456 bytes (268 MB) copied, 23.3705 s, 11.5 MB/s
268435456 bytes (268 MB) copied, 23.3973 s, 11.5 MB/s
268435456 bytes (268 MB) copied, 23.7405 s, 11.3 MB/s
268435456 bytes (268 MB) copied, 23.6118 s, 11.4 MB/s
268435456 bytes (268 MB) copied, 23.5606 s, 11.4 MB/s
268435456 bytes (268 MB) copied, 24.3341 s, 11.0 MB/s

------------------------------
System info
------------------------------
             total       used       free     shared    buffers     cached
Mem:       1020028     291632     728396      70356       6420      89824
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Stepping:              9
CPU MHz:               2294.450
BogoMIPS:              4607.99
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K

iperf results

Connecting to host 172.17.0.2, port 5201
[  4] local 172.17.0.3 port 49244 connected to 172.17.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.29 GBytes  19.7 Gbits/sec    0   1.90 MBytes
[  4]   1.00-2.00   sec  2.84 GBytes  24.4 Gbits/sec  567    953 KBytes
[  4]   2.00-3.00   sec  2.16 GBytes  18.6 Gbits/sec  327    667 KBytes
[  4]   3.00-4.00   sec  2.32 GBytes  19.9 Gbits/sec  166   1.52 MBytes
[  4]   4.00-5.00   sec  2.63 GBytes  22.6 Gbits/sec  565    769 KBytes
[  4]   5.00-6.00   sec  2.71 GBytes  23.3 Gbits/sec  608    583 KBytes
[  4]   6.00-7.00   sec  2.67 GBytes  22.9 Gbits/sec  217   1.40 MBytes
[  4]   7.00-8.00   sec  2.98 GBytes  25.6 Gbits/sec  782    498 KBytes
[  4]   8.00-9.00   sec  2.80 GBytes  24.0 Gbits/sec  359   1.01 MBytes
[  4]   9.00-10.00  sec  2.43 GBytes  20.9 Gbits/sec  883    467 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  25.8 GBytes  22.2 Gbits/sec  4474             sender
[  4]   0.00-10.00  sec  25.8 GBytes  22.2 Gbits/sec                  receiver

iperf Done.

Conclusion

I was on the fence about VBox performance, but the proof is in the pudding here with the test results.  The VBox driver had significantly better FS write performance (almost 2x).  CPU performance was about equal overall, and network throughput was also very similar.  I suspect CPU performance would be in favor of xhyve if these tests were run using VBox 4.x.  Regardless, equal CPU performance, similar network throughput and significantly better FS writes tip the scale in favor of the VBox driver.

As frustrating as it can be at times to use Vbox, many of its past performance issues have been fixed as of the v5.0 release.  The shared folder issue still exists, but it is largely taken care of by the great, easy to use tools that the Docker community has written; docker-machine-nfs is my favorite.

Surprisingly, or maybe not THAT surprisingly, xhyve actually performs worse than Virtualbox at this point.  xhyve itself and the docker-machine xhyve driver are both still super young projects, so there is definitely some room for growth.  That said, it was very straightforward to get xhyve and the docker-machine driver installed and configured, so I believe it is just a matter of time before the xhyve driver matures to a point where it can replace other drivers.  One downside of the xhyve driver is that it also suffers from the host to VM shared folder issue, and the current best workaround is to use the --xhyve-experimental-nfs-share flag (used above) that the docker-machine driver offers.

I will definitely have my eye on the xhyve project moving forward because it looks to be a great alternative to other virtualization technologies for OS X once it reaches a point of maturity.  For now, VBox works more than sufficiently, has been around for a long time, is pretty much ubiquitous across platforms and the developers have shown that they are still actively working on improving the project with the recent 5.0 release.
