Running containers on Windows

A lot of work has gone into bringing Docker containers to the Windows platform lately.  Docker has been working closely with Microsoft on the effort and just announced the availability of Docker on Windows at the latest Ignite conference.  So, in this post we will go from 0 to your first Windows container.

This post covers how to get up and running via the Docker app and also manually with some basic PowerShell commands.  If you just want things to work as quickly as possible, I would suggest the Docker app method; if you are interested in learning what is happening behind the scenes, try the PowerShell method.

The prerequisites are basically the Windows 10 Anniversary update and its required components: the Docker app if you want to configure things through its GUI, or the Windows container feature and Hyper-V if you want to configure your environment manually.

Configure via Docker app

This is by far the easier of the two methods.  This recent blog post has very good instructions and installation steps, which I will walk through below, adding a few pieces of info that helped me out during the installation and configuration process.

After you install the Windows 10 Anniversary update, go grab the latest beta version of the Docker engine via the Docker for Windows project.  NOTE: THIS METHOD WILL NOT WORK IF YOU DON’T USE BETA 26 OR LATER.  To check your version, click the Docker tray icon, choose “About Docker,” and make sure it says -beta26 or higher.

[Screenshot: the About Docker dialog]

After you go through the installation process, you should be able to run Docker containers.  You should also now have access to the other Docker tools, including docker-compose and docker-machine.  To test that things are working, run the following command.

docker run hello-world

If the run command worked, you are most of the way there.  By default, the Docker engine is configured to use a Linux-based VM to drive its containers.  If you run “docker version” you can see that your Docker server (daemon) is using Linux.

docker version
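
If you just want the server’s OS rather than the full version dump, newer Docker clients can also filter the output with a Go template (a quick check, assuming your client supports the --format flag).  At this point it should print linux; after you switch to Windows containers below, it will print windows.

docker version --format '{{.Server.Os}}'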

To get things working with Windows containers, select the “Switch to Windows containers” option from the Docker tray icon.

[Screenshot: the “Switch to Windows containers” tray menu option]

Now run “docker version” again and check which OS the server reports.

docker version

As you can see, your system should now be configured to use Windows containers.  Now you can try pulling a Windows-based container.

docker pull microsoft/nanoserver
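
Once the pull finishes, a quick smoke test is to start an interactive session inside a Nano Server container.  This is just a sketch; cmd is one shell that has shipped in the image, but the exact entrypoint available may vary.

docker run -it microsoft/nanoserver cmd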

If that worked, you are all set.  There’s a lot going on behind the scenes that the Docker app abstracts away, but if you want to try enabling Windows support yourself manually, see the instructions below.

Configure with PowerShell

If you want to try out Windows native containers without the latest Docker beta, check out this guide.  The basic steps are to:

  • Enable the Windows container feature
  • Enable the Hyper-V feature
  • Install Docker client and server

To enable the Windows container feature from the CLI, run the following command from an elevated (admin) PowerShell prompt.

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

To enable the Hyper-V feature from the CLI, run the following command from the same elevated prompt.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

After you enable Hyper-V you will need to reboot your machine.  From the command line, run “Restart-Computer -Force”.
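
Once the machine comes back up, it’s worth double-checking that both features actually got enabled before moving on.  A quick sanity check, using the same cmdlet family as above; both should report a State of Enabled.

Get-WindowsOptionalFeature -Online -FeatureName containers
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V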

Next, you will need to either install the Docker engine manually or just use the Docker app.  Since I have already demonstrated the Docker app method above, here we will install the engine manually.  It’s also worth mentioning that if you are using the Docker app, or have used it previously, these features will already be turned on, which simplifies the process.

The following will download the engine.

Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile "$env:TEMP\docker-1.13.0-dev.zip" -UseBasicParsing

Expand the zip into the Program Files path.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles

Add the Docker engine to the path.

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)
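
Note that this machine-level setting only affects new shells.  If you want the current session to pick up the Docker path as well, you can append it directly (this part is session-only, so nothing extra is persisted):

$env:Path += ";$env:ProgramFiles\Docker"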

Register Docker to run as a Windows service.

dockerd --register-service

Finally, start the service.

Start-Service Docker
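
To confirm that the service registered and started correctly, you can query it; the Status column should show Running.

Get-Service Docker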

Then you can try pulling your Docker image, as above.

docker pull microsoft/nanoserver

There are some drawbacks to this method, especially in a dev environment.

The PowerShell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly.  Obviously the install/config process could be scripted, but that solution isn’t ideal for most users.  Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically.  The managed app, by contrast, also installs and manages versions of the other Docker productivity tools, like Compose and Machine, that make interacting with and managing containers a lot easier.

I can see the PowerShell installation method being leveraged in a configuration management scenario, or where a specific version of Docker should be deployed on a server.  Servers typically don’t need the other tools and should be pinned at specific version numbers to avoid instability and conflicts with other programs.

While the Docker app is still in beta and its Windows container management component is still new, I would definitely recommend it as a solution.  I haven’t had any issues with it yet, outside of a few edge cases, and it makes the Docker experience so much smoother, especially for devs and other folks who are new to Docker and don’t want to muck around in the system.

Check out the Docker for Windows forums if you run into any issues.

Quickly get Node.js up and running on Windows

Installing software on Windows in an automatable, repeatable, and easy way has always been painful.  Luckily, in recent years there have been some really nice additions to Windows and its ecosystem that have improved the process significantly.  The main tools that ease this process are PowerShell and Chocolatey, and they have significantly improved the developer and administrative experience on Windows.

In the past, to install something like a programming language and its environment you would have to manually download a zip or tar file, extract it, put it in the correct place, set up environment variables and system paths by hand, and so on.  Things would also break pretty easily, and the whole workflow was just painful to deal with.

Hopefully you are already at least familiar with PowerShell, because I won’t be covering it much in this post.  If you have any recent version of Windows you should have PowerShell.  Below I describe Chocolatey a little bit and why it is useful; you can also check out the Chocolatey website, which does a much better job of explaining its benefits, how it is used, and why package managers are good.

Update Windows execution policy

This process is pretty straightforward.  Make sure you open a PowerShell prompt with admin privileges, otherwise you will run into problems.  The first step is to change the default system execution policy (if you haven’t already).  On a fresh install of Windows, you will need to loosen up the security in order to install Chocolatey, which will be used to install and manage Node.js.  Luckily, there are just a few PowerShell commands that need to be run.  To check the status of the execution policy, run the following.

Get-ExecutionPolicy
Restricted

This should tell you what your execution policy is currently set to.  To loosen the policy for Choco, run the following command.

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Follow the prompt and choose [Y] to update the policy.  Now, if you run Get-ExecutionPolicy you should see RemoteSigned.

Get-ExecutionPolicy
RemoteSigned

If you don’t have your execution policy opened up to at least RemoteSigned, you will have trouble installing things from the internet, including Chocolatey.  You can find more information about execution policies here if you don’t trust me or just want a better idea of how they work.
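
As a side note, execution policies are set per scope (machine, user, process, and so on), and Get-ExecutionPolicy will show all of them at once if you pass the -List switch:

Get-ExecutionPolicy -List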

Install Chocolatey

If you aren’t familiar, Chocolatey is a package manager for Windows, similar to apt-get on Debian-based Linux systems or yum on Red Hat-based systems.  It allows users to quickly and easily install and manage software packages on Windows platforms through PowerShell.

The command to install Chocolatey is listed below.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

This command will take care of pretty much all of the setup, so just watch it do its thing.  Again, make sure you are inside an elevated admin shell, otherwise you will likely have problems with the installation.
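
A quick way to confirm the installation worked is to ask Chocolatey for its version:

choco --version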

Install Node.js

The last step (finally) is to install Node.js.  Luckily this is the easiest part.  Just run the following command.

choco install nodejs.install

Choose [Y] to accept the install script and let it run.  There should be some colored output, and when it is done Node should be installed on your system.  Make sure you close and re-open your PowerShell prompt so the Node binaries get picked up on your PATH, or run “RefreshEnv” to reload the environment and pick up the new path.  If you are in an admin shell, I would recommend dropping out of it by closing the current session and opening up a new, non-privileged session.

[Screenshot: Node.js installation output]
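
As an aside, if you need a particular Node version instead of whatever is latest, Chocolatey lets you pin one with the --version flag.  The version number below is just an example, so substitute whichever release you actually need.

choco install nodejs.install --version 6.6.0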

Once you have a fresh shell you can test that Node installed properly.

node -v
v6.6.0
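
Since npm ships alongside Node, you can verify it the same way (the exact version will depend on the Node release you installed):

npm -v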

Now you are ready to go.  It only took a few minutes with the Choco package manager.  If you are new to Node in general and are looking for a good resource, the learnyounode project on GitHub is pretty decent.

Let me know if you have any caveats to add to this method.  It is the easiest and fastest way I have found of installing Node, as well as other pieces of software, on Windows without any hassle.

Intro to Hyperterm

If you haven’t heard about it yet, Hyperterm (not to be confused with HyperTerminal) is a cool new project that brings JavaScript to the terminal.  Basically, Hyperterm allows for a wide variety of customization and extension to be added to the terminal, yet doesn’t add extra bloat and keeps things fast.  For those who don’t know, Hyperterm is based on the Electron project, which leverages Node.js to build cross-platform desktop applications.

At its simplest, Hyperterm is a drop-in replacement for other terminal emulators, like iTerm2 or the default terminal app that comes packaged with most OSes.  Since Hyperterm is built on top of Node (via Electron) it is cross platform by default, so it works on Mac and Linux today, with Windows support coming soon.  Obviously this is a win because you can port your configuration to different platforms without reconfiguring anything.  You can also store your configuration in source control, so that if your machine ever dies or you get a new one, you have a nice place to pick things up again, which is pretty slick.

If you know JavaScript, you can start hacking on the look and feel of Hyperterm right away; the Chromium developer tools are literally built into it (Cmd+Option+I).

Installation

To get started, head over to the official Hyperterm website and download the latest release.

Once the download is done and you have gone through the installation process, just fire up Hyperterm and you are good to go.

[Screenshot: the stock Hyperterm terminal]

The stock Hyperterm is definitely usable.  The real power, though, comes from the flexibility and design of the plugin system and configuration files, which make customization easy to get into and really powerful.

Configuration

Hyperterm uses its own configuration file to extend the basic functionality.  The docs are a great resource for learning more about customization and configuration.

The process of changing themes or adding additional functionality is pretty straightforward.  All the plugins that Hyperterm uses are just npm modules, so they can be installed and managed via npm.  For example, to change the default theme, you would open up your ~/.hyperterm.js file.

Look for the “plugins” section.

plugins: [],

Add the desired plugin.

plugins: [
    'hyperterm-atom-dark',
    'hyperline'
],

Then reload Hyperterm to pick up the new configuration by pressing Cmd+Shift+R or by clicking View -> Reload.  You should notice the new theme right away, and a nice status line should show up at the bottom of the terminal thanks to the ‘hyperline’ package; practically no time was spent enabling that functionality, which is a big win in my opinion.

For more ideas, definitely go check out the awesome-hyperterm project.  The repo is a great place to find out more about Hyperterm and other related projects.  The official docs are also a great resource for getting started and finding ideas.

Finally, you can also run,

npm search hyperterm

to get a full listing of npm projects with hyperterm in their name, for even more ideas.  Outside of the plugins, you can easily hack on the configuration file itself to test out how things work.  Again, the config is just JavaScript, so if you know JS it is easy to start modifying things.

Additionally, you can tweak the configuration by hand to customize things like font sizes, colors, the cursor, etc. without having to install or use any plugins.  The process is similar to installing plugins: pop open the ~/.hyperterm.js file, make any adjustments, then reload the terminal and you should be good to go.
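
For illustration, here is a minimal sketch of what a hand-tweaked ~/.hyperterm.js might look like, combining a couple of the documented config options with the plugins list from earlier (a real config file will contain many more keys):

module.exports = {
  config: {
    // bump the font size and change the cursor color
    fontSize: 14,
    cursorColor: 'rgba(248, 28, 229, 0.75)'
  },
  plugins: [
    'hyperterm-atom-dark',
    'hyperline'
  ]
};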

Conclusion

The Hyperterm project is still very new, but it is already capable of serving as your default terminal.  As the project grows in popularity, there will be more and more options for customization, and the terminal itself will continue to improve.  It is exciting to see something new in the terminal emulator space, because there are so few options.  It will be cool to see what new developments are in the works for the project.

It is definitely hard to adjust to something new, but it is also good to get out of your comfort zone sometimes.  There are lots of things to poke around at and plugins to try out with Hyperterm.

I can’t remember the last time I had this much fun fiddling around with terminal settings.  So at the very least, even if you don’t end up switching to Hyperterm full time, give it a try and see if it is a good fit.

Easily log in to Rancher containers locally

Sometimes managing containers through the Rancher web console can be tedious and painful, especially if you need to copy/paste things into or out of the terminal.  I recently discovered a nice little project on GitHub called Rancher SSH, which allows you to connect to a container running in your Rancher environment as if it were local to the machine you are working on, much like SSH, hence the name.

I am still playing around with the functionality, but so far it has been very nice and very easy to get started with.  You can install it either via Homebrew or with Go.  I chose the Homebrew option.

brew install fangli/dev/rancherssh

After it finishes installing (it might take a minute or two), you should have access to the rancherssh command from the CLI.  You might need to source your shell to pick up tab completion, but you should be able to run the command and get some output.

rancherssh

In order to do anything useful with this tool, you will first need to create an API key for rancherssh in Rancher.  Navigate to the environment you’d like to create the key for, then click the API tab in Rancher.  Then click “Add Environment API Key” to bring up the dialog for creating a new key.

[Screenshot: the Add Environment API Key dialog]

After you create your key, make note of the Access key (username) and Secret key (password); you will need these to configure rancherssh in the step below.  First, create a file called config.yml somewhere easy to remember, and populate it similar to the following, updating the endpoint, access key, and secret key.

endpoint: https://your.rancher.server/v1
user: access_key
password: secret_key

That’s pretty much it.  As long as the endpoint matches your environment correctly, you should now be able to connect to a container in your Rancher environment.  You’ll need to run the rancherssh command from the same directory that contains your config.yml file, but otherwise it should just work.

rancherssh my-stack_container_1

Optionally you can provide all of the configuration information to the CLI and just skip the config file completely.

rancherssh --endpoint="https://your.rancher.server/v1" --user="access_key" --password="secret_key" my-test-container_1

There is one last thing to mention.  rancherssh provides a nice fuzzy-matching mechanism for connecting to containers.  For example, if you can’t remember which containers are in a stack in Rancher, you can run a pattern to match the stack, and rancherssh will tell you which containers are running in it and let you choose which one to connect to.

rancherssh %my-stack%

If there are multiple containers this command will allow you to pick which one to connect to.

Searching for container %my-stack%
We found more than one containers in system:
[1] my-stack_container_1, Container ID 1i91308 in project 1a216, IP Address 10.42.154.115
[2] my-stack_container_2, Container ID 1i94034 in project 1a216, IP Address 10.42.119.103
[3] my-stack_container_3, Container ID 1i94036 in project 1a216, IP Address 10.42.146.57

I didn’t have any issues at all getting started with this tool, and I would definitely recommend checking it out, especially if you do a lot of work in your Rancher containers.  It is fast, easy to use, and really useful for the times when using the Rancher UI is too cumbersome.

Generate a Let’s Encrypt certificate using DNS challenge

UPDATE:  The letsencrypt.sh script has been renamed to dehydrated.  Make sure you are using the updated dehydrated script if you are following this guide.

The Let’s Encrypt project has recently unveiled support for the DNS-01 challenge type for issuing certificates, added on the server side with this PR on GitHub; the challenge is not yet enabled in the official client.  This is great news for those looking for more flexibility and additional options when creating and managing LE certificates.  For example, if you are not running a web server and rely strictly on DNS records to verify domain ownership, this new DNS challenge option is a great alternative.

I am still learning the ins and outs of LE but so far it has been an overwhelmingly positive experience.  I feel like tools like LE will be the future of SSL and certificate creation and management, especially as the ecosystem evolves more in the direction of automation and various industries continue to push for higher levels of security.

One of the big issues with implementing DNS support in an LE client is the large range of public DNS providers with no standardized API.  Without that standardization, it becomes very difficult to automate issuing and renewing certificates.  The letsencrypt.sh project mentioned below is nice because it has implemented support for a few of the common DNS providers (AWS, CloudFlare, etc.) as hooks, which allow the letsencrypt.sh client to talk to their APIs and create the necessary DNS records.  Additionally, if support for a DNS provider doesn’t exist, it is easy to add it by creating your own custom hook.

letsencrypt.sh is a nice choice because it is flexible and just works.  It is essentially an implementation of the LE client written in bash, and it is an attractive option because it is well documented, easy to download and use, and very straightforward.  To use the DNS feature you will need to create a hook, which is responsible for placing the correct challenge in your DNS record, as sketched below.
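
To give a feel for what a hook looks like, here is a minimal bash skeleton based on the hook contract letsencrypt.sh uses, where the script is invoked with a stage name followed by stage-specific arguments.  The provider calls are left as comments because they depend entirely on your DNS API:

#!/usr/bin/env bash
# Called by letsencrypt.sh as: hook.sh <stage> <args...>
STAGE="$1"; DOMAIN="$2"; TOKEN_VALUE="$4"

case "$STAGE" in
  deploy_challenge)
    # Create a TXT record for _acme-challenge.$DOMAIN containing $TOKEN_VALUE
    # via your DNS provider's API, then wait for it to propagate.
    ;;
  clean_challenge)
    # Remove the TXT record created above.
    ;;
esac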

Here is an example hook used for connecting to AWS Route 53 to issue certificates for subdomains.  After downloading the example hook script, you need to run a few commands to get things working.  You can grab it and make it executable with the following commands.

curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb

You also need to make sure you have the Ruby dependencies installed on your system for the script to work.  The process of installing gems is pretty simple, but there was an issue with the version of awesome_print at the time of writing, so I had to install a specific version to get things working.  Otherwise, the installation of the other gems was straightforward.  NOTE: These gems are specific to the route53.rb script.  If you use another hook that doesn’t require these dependencies, you can skip the gem installations.

sudo gem install awesome_print -v 1.6.0
sudo gem install aws-sdk
sudo gem install pry
sudo gem install domainatrix

After you install the dependencies, you can run the letsencrypt script.

./letsencrypt.sh

Running the script by itself will show you the different options that it accepts.

The following command specifies the domain on the command line (rather than referencing a domains.txt file), the custom hook that we downloaded, and the dns-01 challenge type.

./letsencrypt.sh --cron --domain test.example.com --hook ./route53.rb --challenge dns-01

Make sure you have your AWS credentials configured, otherwise the certificate creation will fail.  Here’s what the output of a successful certificate creation might look like.

#
# !! WARNING !! No main config file found, using default config!
#
Processing test.example.com
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: deploy_challenge
Challenge: yabPBE9YvPXGFjslRtqXh-qK27QlWQgFlTusqcDzUMQ
{
 :change_info => {
 :id => "/change/C3K8MHKLB6IRKZ",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:54:50 UTC
 }
}
--------------------<
 + Responding to challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: clean_challenge
{
 :change_info => {
 :id => "/change/CE90OICFSN00C",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:55:15 UTC
 }
}
--------------------<
 + Challenge is valid!
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
-------------------->
 Domain: test.example.com
 Root: test.example.com
 Stage: deploy_cert
 Certs: /Users/jmreicha/test/letsencrypt.sh/certs/test.example.com/cert.pem
--------------------<
 + Done!

The entire process of creation and verification should take less than a minute, and when it is done it will drop out a certificate for you.

Here is a dump of the commands used to get from 0 to issuing a certificate with the dns-01 challenge, assuming you already have AWS set up and configured.

git clone https://github.com/lukas2511/letsencrypt.sh.git
cd letsencrypt.sh
curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb
sudo gem install aws-sdk pry domainatrix awesome_print:1.6.0
./letsencrypt.sh --cron --domain yourdomain.example.com --hook ./route53.rb --challenge dns-01

Conclusion

There are other LE clients out there that are working on implementing DNS support, including LEGO and the official Let’s Encrypt client (now called certbot), with more adding the functionality all the time.  I like the letsencrypt.sh script because it is simple, easy to use, and just works out of the box, with only a few tweaks needed.

As mentioned above, I have a feeling that automated certificates are the future, as automation becomes increasingly common for these traditionally manual administration tasks.  Knowing how to issue certificates automatically and how to use the tooling that creates them is a great skill for any DevOps or operations person to have moving forward.
