This is a follow-up to the original post about generating certs with the DNS challenge. I decided to create a little container that can be used to generate a certificate based on the newly renamed dehydrated script, with the extras needed to make DNS provisioning easy.
A few things have changed in the evolution of Let's Encrypt and its tooling since the last post was written. First, some of the tools have been renamed, so I'll clear up the names in case there is any confusion. The official Let's Encrypt client has been renamed to Certbot. The shell script used to provision the certificates has been renamed as well: what used to be called letsencrypt.sh is now dehydrated.
The Docker image can be found here. The image is essentially the dehydrated script plus a few other dependencies that make the DNS challenge work, including Ruby, a Ruby DNS hook script, and a few gems that the hook relies on.
The following is an example of how to run the script:
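Treat this as a sketch rather than authoritative usage; the image name and the in-container cert path below are placeholders, while the flags are standard dehydrated options:

docker run --rm \
  -e AWS_ACCESS_KEY_ID=&lt;access-key&gt; \
  -e AWS_SECRET_ACCESS_KEY=&lt;secret-key&gt; \
  -v $(pwd)/certs:/dehydrated/certs \
  &lt;your-dehydrated-image&gt; --cron --domain test.example.com --challenge dns-01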
Just replace test.example.com with the desired domain. Make sure that you have the DNS zone added to Route53, and make sure the AWS credentials used have the appropriate permissions to read and write records in the Route53 zone.
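For reference, the hook only needs to list zones and create and poll record sets, so a minimal IAM policy sketch looks something like this (scope Resource down to your hosted zone if you like):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}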
The command is essentially the same as the one in the original post, but it is much more convenient to run now: you can specify where on your local system to dump the generated certificates, and you can easily supply or update the AWS credentials.
I'd like to quickly explain the decision to containerize this process. The dehydrated tool was designed and written as a standalone tool, but generating certificates with the DNS challenge requires a few extra pieces. Baking all of the requirements into a container makes the setup portable, so it can be automated across different environments, and flexible, so it can run in a variety of setups with different domain names and AWS credentials. With the container approach, the certs could even be dropped onto a Windows machine running Docker for Windows, for example.
tl;dr This setup may be overkill for some, but it has worked out well for my purposes. Feel free to give it a try if you want to test out creating Let's Encrypt certs with the dehydrated tool in a container.
There has been a lot of work lately on bringing Docker containers to the Windows platform. Docker has been working closely with Microsoft and just announced the availability of Docker on Windows at the latest Ignite conference. So, in this post we will go from zero to your first Windows container.
This post covers how to get up and running via the Docker app, and also manually with some basic PowerShell commands. If you just want things to work as quickly as possible, I suggest the Docker app method; otherwise, if you are interested in what is happening behind the scenes, try the PowerShell method.
The prerequisites are basically the Windows 10 Anniversary Update and its required components: the Docker app, if you want to configure things through its GUI, or the Windows containers feature and Hyper-V, if you want to configure your environment manually.
Configure via Docker app
This is by far the easier of the two methods. This recent blog post has very good instructions and installation steps which I will step through in this post, adding a few pieces of info that helped me out when going through the installation and configuration process.
After you install the Win 10 Anniversary update, go grab the latest beta version of the Docker Engine via the Docker for Windows project. NOTE: THIS METHOD WILL NOT WORK IF YOU DON'T USE BETA 26 OR LATER. To check your Docker app version, click the tray icon, click "About Docker", and make sure it says -beta26 or higher.
After you go through the installation process, you should be able to run Docker containers. You should also now have access to other Docker tools, including docker-compose and docker-machine. To test that things are working, run the following command.
docker run hello-world
If the run command worked, you are most of the way there. By default, the Docker engine is configured to use a Linux-based VM to drive its containers. If you run "docker version" you can see that your Docker server (daemon) is running on Linux.
To get things working with Windows containers, select the option "Switch to Windows containers" from the Docker tray icon menu.
Now run “docker version” again and check what Server architecture is being used.
As you can see, your system should now be configured to use Windows containers. Now you can try pulling a Windows based container.
docker pull microsoft/nanoserver
If the pull worked, you are all set. There's a lot going on behind the scenes that the Docker app abstracts away, but if you want to try enabling Windows support manually yourself, see the instructions below.
Configure with PowerShell
If you want to try out Windows native containers without the latest Docker beta, check out this guide. The basic steps are to:
Enable the Windows container feature
Enable the Hyper-V feature
Install Docker client and server
To enable the Windows container feature from the CLI, run the following command from an elevated (admin) PowerShell prompt.
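The feature names below are the ones I believe the Microsoft guide uses:

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

Enabling Hyper-V works the same way:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All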
After you enable Hyper-V you will need to reboot your machine. From the command line, run "Restart-Computer -Force".
After the reboot, you will need to either install the Docker engine manually or just use the Docker app. Since I have already demonstrated the Docker app method above, here we will install the engine by hand. It's also worth mentioning that if you are using the Docker app method, or have used it previously, these commands have already been run, so the features should already be turned on, simplifying the process.
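The following is a sketch of the manual engine install; the download URL is illustrative, so grab whichever build the guide currently points at:

# Download the Docker engine zip (URL is illustrative; check the guide for the current build)
Invoke-WebRequest "https://get.docker.com/builds/Windows/x86_64/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
# Extract the client and daemon binaries to Program Files
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles
# Put the binaries on the PATH for this session
$env:Path += ";$env:ProgramFiles\docker"
# Register dockerd as a Windows service and start it
&amp; "$env:ProgramFiles\docker\dockerd.exe" --register-service
Start-Service docker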
Then you can try pulling your docker image, as above.
docker pull microsoft/nanoserver
There are some drawbacks to this method, especially in a dev-based environment.
The PowerShell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly. Obviously the install/config process could be scripted, but that solution isn't ideal for most users. Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically. The managed app, by contrast, also installs and manages versions of the other Docker productivity tools, like Compose and Machine, that make interacting with and managing containers a lot easier.
I can see the PowerShell installation method being leveraged in a configuration management scenario, or where a specific version of Docker should be deployed on a server. Servers typically don't need the other tools and should be pinned at specific version numbers to avoid instability and conflicts with other programs.
While the Docker app is still in beta and its Windows container management component is new, I would definitely recommend it as a solution. I haven't had any issues with it yet, outside of a few edge cases, and it makes the Docker experience much smoother, especially for devs and other folks new to Docker who don't want to muck around in the system.
Installing software on Windows in an automatable, repeatable, and easy way has always been painful. Luckily, in recent years there have been some really nice additions to Windows and its ecosystem that have improved the process significantly. The main tools that ease this process are PowerShell and Chocolatey, and they have significantly improved the developer and administrative experience on Windows.
In the past, in order to install something like a programming language and its environment, you would have to manually download the zip or tar file, extract it, put it in the correct place, set up environment variables and system paths by hand, and so on. Things would also break pretty easily, and it was just painful to work with in general.
Hopefully you are already familiar with PowerShell, because I won't be covering it much in this post. If you have any recent version of Windows, you should have PowerShell. Below I describe Chocolatey a little and why it is useful; you can also check out the Chocolatey website, which does a much better job of explaining its benefits, how it is used, and why package managers are good.
Update Windows execution policy
This process is pretty straightforward. Make sure you open a PowerShell prompt with admin privileges, otherwise you will run into problems. The first step is to change the default system execution policy (if you haven't already). On a fresh install of Windows, you will need to loosen up the security in order to install Chocolatey, which will be used to install and manage Node.js. Luckily there are just a few PowerShell commands to run. To check the status of the execution policy, run the following.
Get-ExecutionPolicy
Restricted
This should tell you what your execution policy is currently set to. To loosen the policy for Choco, run the following command.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
Follow the prompt and choose [Y] to update the policy. Now, if you run Get-ExecutionPolicy you should see RemoteSigned.
Get-ExecutionPolicy
RemoteSigned
If you don't have your execution policy opened up to at least RemoteSigned, you will have trouble installing things from the internet, including Chocolatey. You can find more information about execution policies here if you don't trust me or just want a better idea of how they work.
Install Chocolatey
If you aren't familiar with it, Chocolatey is a package manager for Windows, similar to apt-get on Debian-based Linux systems or yum on Red Hat-based systems. It allows users to quickly and easily install and manage software packages on Windows through PowerShell.
The command to install Chocolatey is listed below.
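At the time of writing, the official one-liner from the Chocolatey site looks like the following; double-check their install page for the current command before pasting it into an elevated shell.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex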
This command will take care of pretty much all of the setup so just watch it do its thing. Again, make sure you are inside of an elevated admin shell, otherwise you will likely have problems with the installation.
Install Node.js
The last step (finally) is to install Node.js. Luckily this is the easiest part. Just run the following command.
choco install nodejs.install
Choose [Y] to accept that you want to run the install script and let it run. There should be some colored output, and when it is done Node should be installed on your system. You will need to close and re-open your PowerShell prompt for the Node binaries to be picked up on your PATH, or just refresh the shell by running "RefreshEnv". If you are in an admin shell, I recommend dropping out of it by closing the current session and opening a new, non-privileged session.
Once you have a fresh shell you can test that Node installed properly.
node -v
v6.6.0
Now you are ready to go. It only took a few minutes with the Choco package manager. If you are new to Node in general and are looking for a good resource, the "learn you the node" project on GitHub is pretty decent.
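If you want to give it a shot, the workshop installs as a global npm package (assuming the package is still named learnyounode):

npm install -g learnyounode
learnyounode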
Let me know if you have any caveats to add to this method. It is the easiest and fastest way I have found to install Node, as well as other software, on Windows without any hassle.
If you haven't heard about it yet, Hyperterm (not to be confused with HyperTerminal) is a cool new project that brings JavaScript to the terminal. Basically, Hyperterm allows a wide variety of customizations and extensions to be added to the terminal, yet doesn't add extra bloat and keeps things fast. For those who don't know, Hyperterm is based on the Electron project, which leverages Node.js to build cross-platform desktop applications.
At its simplest, Hyperterm is a drop-in replacement for other terminal emulators, like iTerm2 or the default terminal app that comes packaged with most OSes. Since Hyperterm is built on top of Node (via Electron), it is cross-platform by default, working on Mac and Linux today with Windows support coming soon. This is a win because you can port your configuration to different platforms without reconfiguring anything, and you can store your configuration in source control so that if your machine ever dies, or you get a new one, you have a nice place to pick things up again, which is pretty slick.
If you know JavaScript, you can start hacking on the look and feel of Hyperterm right away; the Chromium developer tools are built right in (Cmd+Option+I).
Installation
To get started, head over to the official Hyperterm website and download the latest release.
Once the download is done, go through the installation process, then fire up Hyperterm and you are good to go.
The stock Hyperterm is definitely usable. The real power, though, comes from the flexibility and design of the plugin system and configuration files, which make customization easy to get going with and really powerful.
Configuration
Hyperterm uses its own configuration file to extend the basic functionality. The docs are a great resource for learning more about customization and configuration.
The process of changing themes or adding additional functionality is pretty straightforward. All the plugins that Hyperterm uses are just npm modules, so they can be installed and managed via npm. For example, to change the default theme, you would open up your ~/.hyperterm.js file.
Look for the “plugins” section.
plugins: [],
Add the desired plugin.
plugins: [
  'hyperterm-atom-dark',
  'hyperline'
],
Then reload Hyperterm to pick up the new configuration by pressing Cmd+Shift+R or by clicking View -> Reload. You should notice the new theme right away, and a nice status line should show up at the bottom of the terminal thanks to the hyperline package. Practically no time was spent enabling the functionality, which is a big win in my opinion.
For more ideas, definitely go check out the awesome-hyperterm project. This repo is a great place to find out more about hyperterm and other cool projects that are related. The official docs are also a great resource for getting started as well as finding some ideas.
Finally, you can also run,
npm search hyperterm
to get a full listing of npm projects with hyperterm in their name, for even more ideas. Outside of the plugins, you can easily hack on the configuration file itself to test out how things work. Again, the config is just JavaScript, so if you know JS it is easy to start modifying things.
Additionally, you can tweak the configuration by hand to customize things like font sizes, colors, the cursor, etc. without installing any plugins. The process is similar to installing plugins: pop open the ~/.hyperterm.js file, make any adjustments, then reload the terminal and you should be good to go.
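For example, the config section of ~/.hyperterm.js contains keys along these lines; the values here are just illustrative, so check the docs for the full list of options:

module.exports = {
  config: {
    // size and face of the terminal font
    fontSize: 14,
    fontFamily: 'Menlo, monospace',
    // cursor and background colors
    cursorColor: '#F81CE5',
    backgroundColor: '#000'
  },
  // plugins from the earlier example
  plugins: [
    'hyperterm-atom-dark',
    'hyperline'
  ]
};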
Conclusion
The Hyperterm project is still very new, but it is already capable of serving as a default terminal. As the project grows in popularity there will be more and more options for customization, and the terminal itself will continue to improve. It is exciting to see something new in the terminal emulator space, because there are so few options. It will be cool to see what new developments are in the works for the project.
It is definitely hard to adjust to something new, but it is good to get out of your comfort zone sometimes. There are lots of things to poke around at and plugins to try out with Hyperterm.
I can’t remember the last time I had this much fun when I was fiddling around with terminal settings. So at the very least, if you don’t switch full time to Hyperterm, give it a try and see if it is a good fit.
Sometimes managing containers through the Rancher web console can be tedious and painful, especially if you need to copy/paste things into or out of the terminal. I recently discovered a nice little project on GitHub called Rancher SSH, which allows you to connect to a container running in your Rancher environment as if it were local to the machine you are working on, much like SSH, hence the name.
I am still playing around with the functionality, but so far it has been very nice and easy to get started with. You can install it either via Homebrew or with Go; I chose the Homebrew option.
brew install fangli/dev/rancherssh
After it is finished installing (it might take a minute or two), you should have access to the rancherssh command from the CLI. You might need to source your shell in order to pick up tab completion for the command but you should be able to run the command and get some output.
rancherssh
In order to do anything useful with this tool, you will first need to create an API key for rancherssh in Rancher. Navigate to the environment you'd like to create the key for and click the API tab in Rancher. Then click "Add Environment API Key" to bring up the dialog for creating a new key.
After you create your key, make note of the Access key (username) and Secret key (password); you will need these to configure rancherssh in the step below. First, create a file called config.yml somewhere easy to remember and populate it, similar to the following, updating the endpoint, access key, and secret key.
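Mine looks roughly like the following; the field names are from the project's README, if memory serves:

endpoint: https://your.rancher.server/v1
user: &lt;access-key&gt;
password: &lt;secret-key&gt;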
That's pretty much it. Make sure the endpoint matches your environment; you should then be able to connect to a container in your Rancher environment. You'll need to run the rancherssh command from the same directory as your config.yml file, but otherwise it should just work.
rancherssh my-stack_container_1
Optionally you can provide all of the configuration information to the CLI and just skip the config file completely.
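Something like this should work, though double-check the flag names against rancherssh --help:

rancherssh --endpoint="https://your.rancher.server/v1" --user="&lt;access-key&gt;" --password="&lt;secret-key&gt;" my-stack_container_1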
There is one last thing to mention. rancherssh provides a nice fuzzy matching mechanism for connecting to containers. For example, if you can't remember which containers are running in a stack in Rancher, you can run a pattern that matches the stack, and rancherssh will tell you which containers are running in it and let you choose which one to connect to.
rancherssh %my-stack%
If there are multiple containers this command will allow you to pick which one to connect to.
Searching for container %my-stack%
We found more than one containers in system:
[1] my-stack_container_1, Container ID 1i91308 in project 1a216, IP Address 10.42.154.115
[2] my-stack_container_2, Container ID 1i94034 in project 1a216, IP Address 10.42.119.103
[3] my-stack_container_3, Container ID 1i94036 in project 1a216, IP Address 10.42.146.57
I didn't have any issues at all getting started with this tool, and I would definitely recommend checking it out, especially if you do a lot of work in your Rancher containers. It is fast, easy to use, and really useful for the times when the Rancher UI is too cumbersome.