Running containers on Windows

A lot of work has gone into bringing Docker containers to the Windows platform lately.  Docker has been working closely with Microsoft to bring containers to Windows and just announced the availability of Docker on Windows at the latest Ignite conference.  So, in this post we will go from zero to your first Windows container.

This post covers some details about how to get up and running via the Docker app and also manually with some basic PowerShell commands.  If you just want things to work as quickly as possible I would suggest the Docker app method; otherwise, if you are interested in learning what is happening behind the scenes, you should try the PowerShell method.

The prerequisites are basically the Windows 10 Anniversary Update and its required components: the Docker app if you want to configure things through its GUI, or the Windows containers feature and Hyper-V if you want to configure your environment manually.
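If you aren't sure which build you are on, a quick check from PowerShell will tell you.  As far as I know the Anniversary Update corresponds to build 14393, so treat that number as the minimum.

# Check the Windows build; the Anniversary Update should be build 14393 or later
[System.Environment]::OSVersion.Version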

Configure via Docker app

This is by far the easier of the two methods.  This recent blog post has very good instructions and installation steps, which I will walk through in this post, adding a few pieces of info that helped me out when going through the installation and configuration process.

After you install the Windows 10 Anniversary update, go grab the latest beta version of the Docker Engine via the Docker for Windows project.  NOTE: THIS METHOD WILL NOT WORK IF YOU DON'T USE BETA 26 OR LATER.  To check your version, click the Docker tray icon, click "About Docker" and make sure it says -beta26 or higher.

about docker

After you go through the installation process, you should be able to run Docker containers.  You should also now have access to other Docker tools, including docker-compose and docker-machine.  To test that things are working, run the following command.

docker run hello-world

If the run command worked, you are most of the way there.  By default, the Docker engine is configured to use a Linux-based VM to drive its containers.  If you run "docker version" you can see that your Docker server (daemon) is using Linux.

docker version
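If you'd rather not eyeball the full output, you can also ask the docker CLI for just the server platform with a Go template.  This is a quick sketch; the commented output is what I would expect while the Linux VM backend is active.

docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
# linux/amd64 while the Linux VM backend is active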

In order to get things working with Windows containers, select the option "Switch to Windows containers" in the Docker tray menu.

switch to windows containers

Now run “docker version” again and check what Server architecture is being used.

docker version

As you can see, your system should now be configured to use Windows containers.  Now you can try pulling a Windows-based container.

docker pull microsoft/nanoserver
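Once the pull finishes, you can go one step further and run a quick smoke test against the image.  This is just a sketch and assumes the nanoserver image ships with cmd available.

docker run --rm microsoft/nanoserver cmd /c "echo hello from a Windows container"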

If the pull and test run worked, you are all set.  There's a lot going on behind the scenes that the Docker app abstracts, but if you want to try enabling Windows support yourself manually, see the instructions below.

Configure with PowerShell

If you want to try out Windows-native containers without the latest Docker beta, check out this guide.  The basic steps are to:

  • Enable the Windows container feature
  • Enable the Hyper-V feature
  • Install Docker client and server

To enable the Windows container feature from the CLI, run the following command from an elevated (admin) PowerShell prompt.

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

To enable the Hyper-V feature from the CLI, run the following command from the same elevated prompt.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
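Before rebooting, it doesn't hurt to verify that both features actually turned on.  Get-WindowsOptionalFeature is the read-only counterpart of the enable command used above.

Get-WindowsOptionalFeature -Online -FeatureName containers
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V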

After you enable Hyper-V you will need to reboot your machine.  From the command line, run "Restart-Computer -Force".

After the reboot, you will need to either install the Docker engine manually or just use the Docker app.  Since I have already demonstrated the Docker app method above, here we will just install the engine.  It's also worth mentioning that if you are using the Docker app method or have used it previously, these commands have already been run, so the features should already be turned on, simplifying the process.

The following will download the engine.

Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile "$env:TEMP\docker-1.13.0-dev.zip" -UseBasicParsing

Expand the zip into the Program Files path.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles

Add the Docker engine to the path.

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)
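One gotcha worth mentioning: the machine-level Path change only applies to new shells.  If you want to use docker from the current session as well, you can append to the session Path too; this is just a convenience step on top of the command above.

# Make docker usable in the current session without opening a new shell
$env:Path += ";$env:ProgramFiles\Docker"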

Set up Docker to be run as a service.

dockerd --register-service

Finally, start the service.

Start-Service Docker
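At this point it is worth confirming that the service registered and started cleanly.  If everything worked, Get-Service should report the service as running and the client should be able to reach the daemon.

Get-Service docker
docker version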

Then you can try pulling your Docker image, as above.

docker pull microsoft/nanoserver

There are some drawbacks to this method, especially in a dev-focused environment.

The PowerShell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly.  Obviously the install/config process could be scripted out, but that solution isn't ideal for most users.  Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically.  The managed app, by contrast, also installs and manages versions of the other Docker productivity tools, like compose and machine, that make interacting with and managing containers a lot easier.

I can see the PowerShell installation method being leveraged in a configuration management scenario or where a specific version of Docker should be deployed on a server.  Servers typically don't need the other tools and should be pinned at specific version numbers to avoid instability and unexpected interactions with other programs.

While the Docker app is still in beta and its Windows container management component is new, I would still definitely recommend it as a solution.  I haven't had any issues with it yet outside of a few edge cases, and it just makes the Docker experience so much smoother, especially for devs and other folks new to Docker who don't want to muck around in the system.

Check out the Docker for Windows forums if you run into any issues.


Easily log in to Rancher containers locally

Sometimes managing containers through the Rancher web console can be tedious and painful, especially if you need to copy/paste things into or out of the terminal.  I recently discovered a nice little project on Github called Rancher SSH which allows you to connect to a container running in your Rancher environment as if it were local to the machine you are working on, much like SSH, hence the name.

I am still playing around with the functionality but so far it has been very nice and is very easy to get started with.  You can install it either via Homebrew or with Golang; I chose the Homebrew option.

brew install fangli/dev/rancherssh

After it is finished installing (it might take a minute or two), you should have access to the rancherssh command from the CLI.  You might need to source your shell in order to pick up tab completion for the command, but you should be able to run the command and get some output.

rancherssh

In order to do anything useful with this tool, you will first need to create an API key for rancherssh in Rancher.  Navigate to the environment you'd like to create the key for and then click the API tab in Rancher.  Then click "Add Environment API Key" to bring up the dialog to create a new key.

add api key

After you create your key, make note of the Access key (username) and Secret key (password); you will need these to configure rancherssh in the step below.  First, create a file called config.yml somewhere that is easy to remember and populate it similar to the following, updating the endpoint, access key and secret key.

endpoint: https://your.rancher.server/v1
user: access_key
password: secret_key

That's pretty much it.  Make sure the endpoint matches your environment correctly, and you should be able to connect to a container in your Rancher environment.  You'll need to run the rancherssh command from the same directory that contains your config.yml file, but otherwise it should just work.

rancherssh my-stack_container_1

Optionally you can provide all of the configuration information to the CLI and just skip the config file completely.

rancherssh --endpoint="https://your.rancher.server/v1" --user="access_key" --password="secret_key" my-test-container_1

There is one last thing to mention.  rancherssh provides a nice fuzzy matching mechanism for connecting to containers.  For example, if you can't remember which containers are running in a stack in Rancher, you can run a pattern that matches the stack, and rancherssh will tell you which containers are running in the stack and allow you to choose which one to connect to.

rancherssh %my-stack%

If there are multiple containers this command will allow you to pick which one to connect to.

Searching for container %my-stack%
We found more than one containers in system:
[1] my-stack_container_1, Container ID 1i91308 in project 1a216, IP Address 10.42.154.115
[2] my-stack_container_2, Container ID 1i94034 in project 1a216, IP Address 10.42.119.103
[3] my-stack_container_3, Container ID 1i94036 in project 1a216, IP Address 10.42.146.57

I didn't have any issues at all getting started with this tool, and I would definitely recommend checking it out, especially if you do a lot of work in your Rancher containers.  It is fast, easy to use and really useful for the times when using the Rancher UI is too cumbersome.


ELK 5 on Docker

This is a little follow-up to a post I did a while back about getting the ELK stack up and running using Docker.  The last post was over a year ago and a lot has changed in regards to both Docker and the ELK stack.

All of the components of the ELK stack (Elasticsearch, Logstash and Kibana) have gone through several revisions since the last post, with all kinds of features and improvements along the way.  The current iteration is v5 for all of the components.  v5 is still in alpha, but that doesn't mean we can't get it up and running with Docker.  NOTE: I don't recommend running ELK v5 in any kind of setup outside of development at this point, since it is still alpha.

Docker has evolved a little bit as well since the last post, which helps with some of the setup.  The improvements in docker-compose allow us to wrap everything up neatly in containers and leverage some cool new Docker features.

Here is the updated elk-docker repo.  Please open a PR or issue if you have ideas for improvement or if there are any issues you run into.

For the most part the items in the repo shouldn’t need to change unless you are interested in adjusting the Elasticsearch configuration or you want to update the Logstash input/filter/output configuration.  The Elasticsearch config is located in es/elasticsearch.yml and the Logstash config is located in logstash/logstash.conf.

This configuration has been tested using Docker version 1.11 and docker-compose 1.7 on OS X.

Here’s what the docker-compose file looks like.

version: '2'
services:
  elasticsearch:
    image: elasticsearch:5
    command: elasticsearch
    environment:
      # This helps ES out with memory usage
      - ES_JAVA_OPTS=-Xmx1g -Xms1g
    volumes:
      # Persist elasticsearch data to a volume
      - elasticsearch:/usr/share/elasticsearch/data
      # Extra ES configuration options
      - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"

  logstash:
    image: logstash:5
    command: logstash --auto-reload -w 4 -f /etc/logstash/conf.d/logstash.conf
    environment:
      # This helps Logstash out if it gets too busy
      - LS_HEAP_SIZE=2048m
    volumes:
      # volume mount the logstash config
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf
    ports:
      # Default GELF port
      - "12201:12201/udp"
      # Default UDP port
      - "5000:5000/udp"
      # Default TCP port
      - "5001:5001"
    links:
      - elasticsearch

  kibana:
    image: kibana:5
    environment:
      # Point Kibana to the elasticsearch container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch

  kopf:
    image: rancher/kopf:v0.4.0
    ports:
      - "8080:80"
    environment:
      KOPF_ES_SERVERS: "elasticsearch:9200"
    links:
      - elasticsearch

volumes:
  elasticsearch:

Notice that we are just storing the Elasticsearch data in a Docker volume called “elasticsearch”.  Storing the data in a volume makes it easier to manage.
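One thing to be aware of is that docker-compose prefixes named volumes with the project name, which defaults to the directory the compose file lives in.  So once the stack has been brought up at least once, inspecting the volume looks something like the following; the elk_ prefix here assumes the project directory is named elk.

docker volume ls
# The volume name below assumes the compose project directory is called "elk"
docker volume inspect elk_elasticsearch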

To start up the ELK stack just run "docker-compose up" (plus -d for detached) and you should see the ELK components start to come up in the docker-compose log messages.  It takes about a minute or so to come up.

After everything has bootstrapped and come up you can see the fruits of your labor.  If you are using the Docker beta app (which I highly recommend) you can just visit localhost:5601 in your browser.

kibana5
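You can also confirm that Elasticsearch itself is healthy from the CLI before poking around in Kibana.  The cluster health endpoint is part of the standard Elasticsearch API.

curl -s 'http://localhost:9200/_cluster/health?pretty'
# Look for "status" : "green" or "yellow" (yellow is normal for a single node)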

Bonus

To easily get some logs into ELK to start playing around with some data, you can run the logspout container as I have below.  This will forward the logs from the Docker daemon to Logstash for you automatically so that Logstash can create a timestamped index for you, as above.

docker run --rm --name="logspout" \
 --volume=/var/run/docker.sock:/var/run/docker.sock \
 --publish=127.0.0.1:8000:80 \
 gliderlabs/logspout:master \
 syslog://<local_ip_address>:5001

Edit: 3/10/17

If you are testing locally with the Logstash adapter enabled, you can use the following docker + logspout command to populate logs into Elasticsearch and create an index automatically.

docker run --rm --name="logspout" \
 --volume=/var/run/docker.sock:/var/run/docker.sock \
 --publish=127.0.0.1:8000:80 \
 --env DEBUG=1 \
 <logspout_logstash_image> \
 logstash+tcp://<local_ip_address>:5001

The value of <local_ip_address> should be the address of your laptop or desktop, which you can grab with ifconfig.  The DEBUG=1 environment variable in the command above is optional and helps troubleshoot issues; drop that line if you don't want the extra output.

Then you can check your local interface to see packets being sent from logspout to Logstash using tcpdump.  You might need to adjust lo0 to the interface used by Docker on the local machine.

sudo tcpdump -v -s0 -A -i lo0 port 5001

Alternatively, you can use curl if you enabled the http module in logspout.

curl http://127.0.0.1:8000/logs

That's pretty much all there is to it.  Feel free to tweak the configs if you want to play around with Logstash or Elasticsearch.  And also please let me know if you have any ideas for improvement or run into any issues getting this up and running.


Bootstrap servers to a Rancher environment

If you’re not familiar already, Rancher is an orchestration and scheduling tool for containers.  I have written a little bit about Rancher in the past but haven’t covered much on the specifics about how to manage a Rancher environment.  One cool thing about Rancher is its “single pane of glass” approach to managing servers and containers, which allows users and admins to quickly and easily manage complicated environments.  In this post I’ll be covering how to quickly and automatically add servers to your Rancher environment.

One of the manual steps that can (and in my opinion should) be automated is the server bootstrapping process.  The Rancher web interface allows users to add hosts across different cloud providers (AWS, Azure, GCE, etc.) and, importantly, offers the ability to add a custom host.  This custom host registration is the piece that allows us to automate the host addition process by exposing a registration token via the Rancher API.  One important thing to note if you are going to be adding hosts automatically is that you will need to create the necessary entries in the environment that you bootstrap servers to.  So for example, if you create a new environment you will either need to programmatically hit the API or navigate to Infrastructure -> Add Host in the web interface to populate the necessary tokens and entries.

Once you have populated the API with the values needed, you will need to create an API token to allow the server(s) that are bootstrapping to connect to the Rancher server and add themselves.  If you haven't done this before: in the environment you'd like to allow access to, navigate to API -> Add Environment API Key, name it and make a note of the key that gets generated.

rancher api
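Before baking the key into a bootstrap script, it is worth sanity checking it with curl.  The keys and URL below are placeholders; a valid key should return JSON from the v1 API rather than an authentication error.

curl -su access_key:secret_key https://your.rancher.server/v1/projects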

That’s pretty much all of the prep work you need to do to your Rancher environment for this method to work.  The next step is to make a script to bootstrap a server when it gets created.  The logic for this bootstrap process can be boiled down to the following snippet.

#!/bin/bash

INTERNAL_IP=$(ip add show eth0 | awk '/inet/ {print $2}' | cut -d/ -f1 | head -1)
SERVER="https://example.com"
TOKEN="access_key:secret_key"
PROJID="unique_environment"
AGENT_VER="v1.0.1"

RANCHER_URL=$(curl -su $TOKEN $SERVER/v1/registrationtokens?projectId=$PROJID | head -1 | grep -nhoe 'registrationUrl[^},]*}' | egrep -hoe 'https?:.*[^"}]')

docker run \
  -e CATTLE_AGENT_IP=$INTERNAL_IP \
  -e CATTLE_HOST_LABELS='your=label' \
  -d --privileged --name rancher-bootstrap \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:$AGENT_VER $RANCHER_URL

The script is pretty straightforward.  It attempts to gather the internal IP address of the server being created so that it can add it to the Rancher environment with a unique name.  Note that there are a number of variables that need to be set to reflect your environment: one for the DNS name of the Rancher server, one for the token that was generated in the step above and one for the project ID, which can be found by navigating to the environment and looking at the URL for /env/xxxx.

After we have all the needed information and have updated the script, we can curl the Rancher server (this won't work if you didn't populate the API in the steps above or if your keys are invalid) to obtain the registration URL.  Finally, the script starts a Docker container with the agent version set (check your Rancher server version and match the agent to it) along with the URL obtained from the curl command.

The final step is to get the script to run when the server is provisioned.  There are many ways to do this and this step will vary depending on a number of different factors, but in this post I am using cloud-init for CoreOS on AWS.  Cloud-init injects the script into the server and creates a systemd service that runs the script the first time the server boots; the script in turn runs the Rancher agent, which allows the server to be picked up by the Rancher server and its environment.

Here is the logic to run the script when the server is booted.

coreos:
  units:
  - name: rancher-agent.service
    command: start
    content: |
      [Unit]
      Description=Rancher Agent
      After=docker.service
      Requires=docker.service
      After=network-online.target
      Requires=network-online.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/etc/rancher-agent

The full version of the cloud-init file can be found here.

After you provision your server and give it a minute to warm up and run the script, check your Rancher environment to see if your server has popped up.  If it hasn’t, the first place to start looking is on the server itself that was just created.  Run docker logs -f rancher-agent to get information about what went wrong.  Usually the problem is pretty obvious.
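Since the bootstrap runs as a systemd unit, the unit status and journal are also worth checking if the agent container never started at all.  These are standard systemd commands, nothing Rancher specific.

systemctl status rancher-agent.service
journalctl -u rancher-agent.service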

A brand new server looks something like this.

bootstrapped server

I typically use Terraform to provision these servers, but I feel like covering Terraform here is a little bit out of scope.  You can imagine some really interesting possibilities with auto scaling groups and load balancers that can come and go as your environment changes, one of the beauties of disposable infrastructure as well as infrastructure as code.

If you are interested in seeing how this Rancher bootstrap process fits in with Terraform let me know and I’ll take a stab at writing up a little piece on how to get it working.


Lint your Dockerfiles with Hadolint

If you haven't already gotten into the habit of linting your Dockerfiles, you should.  Code linting is a common practice in software development that helps find, identify and eliminate issues and bugs before they ever have a chance to become a problem.

Taking the concept of linting and applying it to Dockerfiles gives you a quick and easy way to identify errors before you ever build any Docker images.  By running your Dockerfile through a linter first, you can ensure that there aren't any structural problems with the logic and instructions specified in your Dockerfiles.  Linting your files is fast and easy (as I will demonstrate), and getting in the habit of adding a linting step to your development workflow is often very useful: not only will the linter help identify hidden issues which you might not otherwise catch right away, it can potentially save hours of troubleshooting later on, so the effort-to-benefit ratio is pretty good.

There are several other Docker linting tools around.

But in my experience, these tools have either been overly complicated, don't catch as many errors, or in general just don't seem to work as well or have as much polish as Hadolint.  I may just have a skewed perspective of these tools, but this was my experience when I tried them, so take my evaluation with a grain of salt.  Definitely let me know if you have experience with any of these tools; I may just need to revisit them to get a better perspective.

With that being said, Hadolint offers everything I need, is very easy to use and get started with, and makes linting your Dockerfiles trivially easy, which counts for the most points in my experience.  Another bonus of Hadolint is that the project is fairly active and the author is friendly, so if there are things you'd like to see added, it shouldn't be too hard to get some movement.  You can check out the project on Github for details about how to install and run Hadolint.

Below, I will go over how to get setup and started as well as some basic usage.

Install

If you use Mac OS X there is a brew formula for installing Hadolint.

brew update
brew install hadolint

If you are a Windows user, for now you will have to run Hadolint from within a Docker container.

docker run --rm -i lukasmartinelli/hadolint < Dockerfile

If you feel comfortable with the source code you can try building the code locally.  I haven’t attempted that method, so I don’t have instructions here for how to do it.  Go check out the project if you are interested.

Usage

Hadolint helps you find syntax errors and other mistakes that you may not otherwise notice in your Dockerfiles.  It's easy to get started.  To run Hadolint, run the following.

hadolint Dockerfile

If there are any issues, Hadolint will print out the rule number as well as a blurb describing what could potentially be wrong.

DL4000 Specify a maintainer of the Dockerfile
L1 DL3007 Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag.
L3 DL3013 Pin versions in pip. Instead of `pip install <package>` use `pip install <package>==<version>`
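If you want to see rules like these fire on demand, you can lint a deliberately sloppy Dockerfile.  The throwaway example below is mine, not from the Hadolint docs, and should trip the unpinned tag and unpinned pip rules shown above.

# Write a deliberately sloppy Dockerfile and lint it
cat > Dockerfile.test <<'EOF'
FROM python:latest
RUN pip install requests
EOF
hadolint Dockerfile.test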

As with any linting tool, you will definitely get some false positives at some point, so just be aware of items that can potentially be ignored.  There is an issue open on Github right now to allow Hadolint to ignore certain rules, which will help eliminate some of the false positives.  For example, in the above snippet we don't necessarily care about the maintainer missing, so it will be nice to be able to ignore that line.

Here is a complete reference for all of the linting rules.  The wiki gives examples of what is wrong and how to fix things, which is very helpful.  Additionally, the author is welcoming ideas for additional things to check, so if you have a good idea for a linting rule, open up an issue.

That’s it for now.  Good luck and happy Docker linting!
