Microsoft has been making a lot of inroads in the Open Source and Linux communities lately. Linux and Unix purists have understandably been skeptical of this recent shift, and Canonical, for example, has caught flak for partnering with Microsoft. But the times are changing, so instead of resenting this progress, I chose to embrace it. I’ll even admit that I actually like many of the Open Source contributions Microsoft has been making – including a flourishing GitHub account, as well as an increasingly rich, cross-platform set of tools that includes Visual Studio Code, Ubuntu/Bash for Windows, .NET Core and many others.
If you want to take the latest and greatest in PowerShell v6 for a spin on a Linux system, I recommend using a Docker container if one is available. Otherwise, just spin up an Ubuntu (14.04+) VM and you should be ready. I do not recommend PowerShell for any type of workload beyond experimentation, as it is still in alpha on Linux. The beta v6 release (with Linux support) is around the corner, but there is still a lot of ground to cover to get there. Since PowerShell is Open Source, you can follow the progress on GitHub!
If you use the Docker method, just pull and run the container:
docker run -it --rm ubuntu:16.04 bash
Then install the prerequisites for adding the Microsoft Ubuntu repo:
# apt-transport-https is needed for connecting to the MS repo
apt-get update && apt-get install -y curl apt-transport-https
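The repo-setup and install commands themselves did not survive in this copy of the post, so here is a sketch of the steps that were typical at the time. The packages.microsoft.com key and list URLs are assumptions based on Microsoft's Ubuntu 16.04 instructions, so verify them against the current PowerShell install docs:

```shell
# Import the Microsoft GPG key and register the repo (Ubuntu 16.04 paths assumed)
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list \
  > /etc/apt/sources.list.d/microsoft.list

# Install and launch PowerShell
apt-get update && apt-get install -y powershell
powershell
```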
If it worked, you should see the PowerShell banner and a new command prompt:
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
Congratulations, you now have Powershell running in Linux. To take it for a spin, try a few commands out.
Write-Host "Hello World!"
This should print out a hello world message. The Linux release is still in alpha, so there will surely be some discrepancies between Linux and Windows-based systems, but the majority of cmdlets should work the same way. For example, I noticed in my testing that the terminal was very flaky: reverse search (ctrl+r) and the Get-History cmdlet worked well, but arrow-key scrolling through history did not.
You can even run PowerShell on OS X now if you choose to. I haven’t tried it yet, but it is an option for those that are curious. Needless to say, I am looking forward to the beta release.
This is a little bit of a follow-up to the original post about generating certs with the DNS challenge. I decided to create a little container that can be used to generate a certificate based on the newly renamed dehydrated script, with the extras included to make DNS provisioning easy.
A few things have changed in the evolution of Let’s Encrypt and its tooling since the last post was written, so I’ll first clear up the names in case there is any confusion. The official Let’s Encrypt client has been renamed to Certbot, and the shell script used to provision the certificates has been renamed as well: what used to be called letsencrypt.sh is now dehydrated.
The Docker image can be found here. The image is essentially the dehydrated script plus a few other dependencies needed to make the DNS challenge work, including Ruby, a Ruby DNS hook script and a few gems that the script relies on.
The following is an example of how to run the script:
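The original command did not survive in this copy of the post, so the following is a sketch of the sort of invocation this container expects. The image name, volume path and hook path are illustrative placeholders, not the real values; the long flags (--cron, --domain, --challenge, --hook) are standard dehydrated options:

```shell
# Image name and paths below are placeholders - substitute your own
docker run -it --rm \
  -v "$(pwd)/certs:/dehydrated/certs" \
  -e AWS_ACCESS_KEY_ID="your-key-id" \
  -e AWS_SECRET_ACCESS_KEY="your-secret-key" \
  your-registry/dehydrated-dns \
  --cron --domain test.example.com --challenge dns-01 --hook /usr/local/bin/route53.rb
```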
Just replace test.example.com with the desired domain. Make sure that the DNS zone has been added to Route53 and that the AWS credentials used have the appropriate permissions to read and write records on that zone.
The command is essentially the same as the one in the original post, but it is a lot more convenient to run now because you can specify where on your local system the generated certificates should be dumped, and you can easily specify or update the AWS credentials.
I’d like to quickly explain the decision to containerize this process. The dehydrated tool has been designed and written as a standalone tool, but generating certificates using the DNS challenge requires a few extra pieces to be added. Cooking all of the requirements into a container makes the setup portable, so it can easily be automated across different environments, and flexible, so it can be run in a variety of setups with different domain names and AWS credentials. With the container approach, the certs could even be dropped onto a Windows machine running Docker for Windows, for example.
tl;dr This setup may be overkill for some, but it has worked out well for my purposes. Feel free to give it a try if you want to test out creating Certbot certs with the dehydrated tool in a container.
UPDATE: The letsencrypt.sh script has been renamed to dehydrated. Make sure you are using the updated dehydrated script if you are following this guide.
The Let’s Encrypt project has recently unveiled support for the DNS-01 challenge type for issuing certificates. Support was added on the server side with this PR on GitHub, but the challenge is not enabled in the official client (yet). This is great news for those looking for more flexibility and additional options when creating and managing LE certificates. For example, if you are not running a web server and rely strictly on having a DNS record to verify domain ownership, this new DNS challenge option is a great alternative.
I am still learning the ins and outs of LE but so far it has been an overwhelmingly positive experience. I feel like tools like LE will be the future of SSL and certificate creation and management, especially as the ecosystem evolves more in the direction of automation and various industries continue to push for higher levels of security.
One of the big issues with implementing DNS support in an LE client as things currently stand is the large range of public DNS providers with no standardized API. Without that standardization, it becomes very difficult to automate issuing and renewing certificates with LE. The letsencrypt.sh project mentioned below is nice because it has implemented support for a few of the common DNS providers (AWS, CloudFlare, etc.) as hooks, which allow the letsencrypt.sh client to connect to their APIs and create the necessary DNS records. Additionally, if support for a DNS provider doesn’t exist, it is easy to add by creating your own custom hook.
letsencrypt.sh is a nice choice because it is flexible and just works. It is essentially an implementation of the LE client written in bash. It is well documented, easy to download and use, and very straightforward. To use the DNS feature you will need a hook, which is responsible for placing the correct challenge in your DNS record.
Here is an example hook that connects to AWS Route53 to issue certificates for subdomains. After downloading the example hook script, you need to run a few commands to get things working. You can grab it with the following command.
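The download command itself was lost from this copy of the post; the hook was linked in the paragraph above, so treat the URL below as a placeholder for wherever the route53.rb script is actually hosted:

```shell
# Placeholder URL - point this at the actual location of the example hook
curl -L -o route53.rb https://example.com/path/to/route53.rb
chmod +x route53.rb
```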
You also need to make sure you have the Ruby dependencies installed on your system for the script to work. Installing gems is pretty simple, but there was an issue with the version of awesome_print at the time I wrote this, so I had to install a specific version to get things working. Otherwise, the installation of the other gems was straightforward. NOTE: These gems are specific to the route53.rb script. If you use another hook that doesn’t require these dependencies you can skip the gem installations.
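The exact gem list depends on what the hook script requires; below is a sketch of the installation, with the gem names taken as assumptions (check the require lines at the top of route53.rb for the real list and the known-good awesome_print version):

```shell
# Gem names and the pinned version are assumptions - match them to the hook's requires
gem install aws-sdk
gem install awesome_print --version '<known-good version>'
```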
After you install the dependencies, you can run the letsencrypt script.
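The invocation itself did not survive in this copy of the post, but based on the options described below it would look something like this (letsencrypt.sh uses -c for its cron/signing mode, -d for the domain, -t for the challenge type and -k for the hook script; the domain is a placeholder):

```shell
# Domain is a placeholder - substitute your own
./letsencrypt.sh -c -d test.example.com -t dns-01 -k ./route53.rb
```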
You can see a few different options in this command.
The command specifies the domain directly (rather than referencing a domains.txt file), points at the custom hook that we downloaded, and sets the challenge type to dns-01.
There are other LE clients out there that are working on implementing DNS support, including LEGO and the official Let’s Encrypt client (now called Certbot), with more clients adding support and functionality all the time. I like the letsencrypt.sh script because it is simple, easy to use, and just works out of the box, with only a few tweaks needed.
As mentioned above, I have a feeling that automated certificates are the future, as automation becomes increasingly common for these traditionally manual administration tasks. Knowing how to issue certificates automatically, and how to use the tooling that creates them, is a great skill for any DevOps or operations person moving forward.
The new Jenkins pipeline integration in 2.0 is pretty awesome and is a great way to add more automation to Jenkins. In this post we will set up Jenkins so that we can write our own custom libraries for the pipeline to use.
One huge benefit of integrating the pipeline into the core components of Jenkins is the idea of a Jenkinsfile, which enables Jenkins jobs to execute automatically against a repo, similar to the way TravisCI works. Simply place a Jenkinsfile in the root of the repo that you wish to automate, set up your webhook so that events in GitHub trigger the Jenkins job, and let Jenkins take care of the build.
There are a few very good guides for getting this functionality set up.
Unfortunately, while these guides/docs are very informative and useful, they are somewhat confusing to new users and gloss over a few important steps and details that took me longer than it should have to figure out and understand. Therefore the focus here will be on some of the details those guides lack, especially setting up the built-in Jenkins Git repo for accessing and working with the custom pipeline libraries.
There are many great resources out there already for getting a Jenkins server up and running. For the purposes of this post it is assumed that you have already created and set up a Jenkins instance, using one of the other Digital Ocean [tutorials](https://www.digitalocean.com/community/tutorials/?q=jenkins).
The first step in getting the pipeline set up, again assuming you have already installed Jenkins 2.0 from one of the guides linked above, is to enable the built-in Jenkins Git repo for housing the custom pipeline libraries we will write a little later on. There is a section in the workflow plugin tutorial that explains it, but it is one of the confusing areas and is buried in the documentation, so it will be covered in more detail below.
In order to be able to clone, read and write the built-in Jenkins Git repo, you will need to add your SSH public key to the server. The first step is to configure the SSH plugin with a port to connect to so that the repo can be cloned. This can be found in Jenkins -> Configuration. In this example I used port 2222.
As always security should be a concern, so make sure you have authentication turned on. In this example, I am using GitHub authentication but the process will be similar for other authentication methods. Also make sure you use best practices in locking down any external access by hardening SSH or using other ways of blocking access.
After the Git server has been configured, you will need to add the public key for your user in Jenkins. This was another of the initially confusing parts: the section describing how to add your personal SSH public key was buried in docs that are easy to gloss over; the page can be found here. The place in Jenkins to add this key is here:
Notice that we are using port 2222. This is the port we configured via the SSH plugin from above. The port number can be any port you like, just make sure to keep it consistent in the configuration and here.
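For reference, cloning the built-in library repo over SSH on the configured port looks something like the following; the user and hostname are placeholders for your own Jenkins server:

```shell
# User and hostname are placeholders; 2222 is the SSH plugin port configured above
git clone ssh://user@jenkins.example.com:2222/workflowLibs.git
```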
Working with the pipeline
With the pipeline library repo cloned, we can write a simple function and then add it back into Jenkins. By default the repo is called workflowLibs. The basic structure of the repo is to place your custom functions in the vars directory. Within the vars directory you create a .groovy file for the function and a matching .txt file for any documentation of the command you want to add. In our example, let’s create a hello function.
Create a hello.groovy and a hello.txt file. Inside the hello.groovy file, add something similar to the following.
echo 'Hello world!'
Go ahead and commit the hello.groovy file, don’t worry about the hello.txt file.
git add hello.groovy
git commit -m "Hello world example"
git push  # you may need to set the upstream first
Obviously this function won’t do much. Once you have pushed the new function you should be able to view it in the dropdown of pipeline functions in the Jenkins build configuration screen.
NOTE: There is a nice little script-testing tool built into the pipeline now called the snippet generator. If you want to test out small scripts on your Jenkins server before committing and pushing any local changes, you can try the script out in Jenkins first. To do this, open up any of your Jenkins job configurations and click the link that says “Pipeline syntax” towards the bottom of the page.
Here’s what the snippet generator looks like.
From the snippet-generator you can build out quite a bit of the functionality that you might want to use as part of your pipeline.
Configuring the job
There are a few different options for setting up Jenkins pipelines. The most common types are the Pipeline and the Multibranch Pipeline. The Pipeline implements all of the scripting capabilities that have been covered, so the power of the Groovy scripting language can be leveraged in the Jenkins job, along with things like application life cycles (via stages), which makes CI/CD much easier.
The Multibranch Pipeline augments the functionality of the Pipeline by adding in the ability to index branches from SCM so that different branches can be easily built and tested. The Multibranch Pipeline is especially useful for larger projects that have many developers and feature branches being worked on because each one of the branches can be tracked separately so developers don’t need to worry as much about overlapping with others work.
Taking the pipeline functionality one step further, it is possible to place a Jenkinsfile with all of the needed pipeline code inside a repo so that it can be built automatically. The Jenkinsfile is basically a blueprint describing the how and what of a project’s build process, and it can leverage any custom Groovy functions you have written that live on the Jenkins server.
Using the combination of a GitHub webhook and a Jenkinsfile in a Git repo, it is easy to automatically tell your Jenkins server to kick off a build every time a commit or PR happens in GitHub.
Let’s take a look at what an example Jenkinsfile might look like.
node {
    stage 'Checkout'
    // Checkout logic goes here
    stage 'Build'
    // Build logic goes here
    stage 'Test'
    // Test logic goes here
    stage 'Deploy'
    // Deploy logic goes here
}
This Jenkinsfile defines various “stages”, which run through a set of functions described in each stage every time a commit has been pushed or a PR has been opened for a given project. One workflow, as shown above, is to segment the job into checkout, build, test and deploy stages. Different projects might require different stages, so it is nice to have granular control of what the job does on a per-repo basis.
Bonus: GitHub Webhooks
Configuring webhooks in GitHub is pretty easy as well. Git is fairly standard these days for storing source code and there are a number of different Git management tools, so the steps should be very similar if you are using a tool other than GitHub. A GitHub webhook can be configured to trigger a Jenkins pipeline build when either a commit is pushed to a branch, like master, or a PR is created for a branch. The advantages of using webhooks should be pretty apparent: builds are created automatically and can be configured to report their results to various communication channels like email, Slack, or a number of other chat tools. The webhooks are the last step in automating the new pipeline features.
To configure the webhook, first make sure there is a Jenkinsfile in the root directory of the project. After the Jenkinsfile is in place, navigate to the settings of the project you would like to create a webhook for and select ‘Settings’ -> ‘Webhooks & services’. From there, there is a button to add a new webhook.
Change the Payload URL to point at the Jenkins server, update the Content type to application/x-www-form-urlencoded, and leave the secret section blank. All the other defaults should be fine.
After adding the webhook, create or update the associated job in Jenkins. Make sure the new job is configured as either a pipeline or multibranch pipeline type.
In the job configuration point Jenkins at the GitHub URL of the project.
Also make sure to select the build trigger to ‘Build when a change is pushed to GitHub’.
You may need to configure credentials if you are using private GitHub repos. This can be done in Jenkins by navigating to ‘Manage Jenkins’ -> ‘Credentials’ -> ‘Global’, then choosing ‘Add Credentials’ and selecting the SSH key used in conjunction with GitHub. After the credentials have been set up, there will be an option when configuring jobs to use the SSH key to authenticate with GitHub.
Writing Jenkinsfiles and custom libraries can take a little time to get the hang of initially, but they are very powerful. If you already have experience writing Groovy, then writing these functions and files should be fairly straightforward.
The move towards pipelines brings a number of nice features. First, you can keep track of your Jenkins job definition simply by adding a Jenkinsfile to a repo, so you get all of the benefits of history and version tracking and one central place to keep your build configurations. And because Groovy is such a flexible language, pipelines give developers and engineers more options and creativity in terms of what their build jobs can do.
One gotcha of this process is that there isn’t a great workflow yet for working with the library functions, so there is a lot of trial and error involved in getting custom functionality working correctly. One good way to debug is to set up a test job and watch for errors in the console output when you trigger a build. With the combination of the snippet generator and a test job, this process has become much easier.
Another thing that can be tricky is the Groovy sandbox. It is mostly an annoyance and I would not suggest turning it off; just be aware that it exists and often needs to be worked around.
There are many more features and things that you can do with the Pipeline, so I encourage readers to go out and explore some of the possibilities; the docs linked above are a good place to start. As the pipeline matures, more and more plugins are adding the ability to be configured via the pipeline workflow, so if something isn’t possible right now it probably will be very soon.
If you’re not familiar already, Rancher is an orchestration and scheduling tool for containers. I have written a little bit about Rancher in the past but haven’t covered much on the specifics about how to manage a Rancher environment. One cool thing about Rancher is its “single pane of glass” approach to managing servers and containers, which allows users and admins to quickly and easily manage complicated environments. In this post I’ll be covering how to quickly and automatically add servers to your Rancher environment.
One of the manual steps that can (and in my opinion should) be automated is the server bootstrapping process. The Rancher web interface allows users to add hosts across different cloud providers (AWS, Azure, GCE, etc.) and, importantly, to add a custom host. This custom host registration is the piece that allows us to automate the host-addition process, by exposing a registration token via the Rancher API. One important thing to note if you are going to be adding hosts automatically is that you will need to create the necessary entries in the environment that you bootstrap servers into. For example, if you create a new environment, you will either need to hit the API programmatically or navigate in the web interface to Infrastructure -> Add Host to populate the necessary tokens and entries.
Once you have populated the API with the values needed, you will need to create an API token to allow the bootstrapping server(s) to connect to the Rancher server and add themselves. If you haven’t done this before, navigate, in the environment you’d like to allow access to, to API -> Add Environment API Key, name it, and make a note of the key that gets generated.
That’s pretty much all of the prep work you need to do to your Rancher environment for this method to work. The next step is to make a script to bootstrap a server when it gets created. The logic for this bootstrap process can be boiled down to the following snippet.
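The snippet itself was lost from this copy of the post, so here is a sketch reconstructed from the description below. The Rancher server URL, API key, project ID and agent version are all environment-specific placeholders, and the v1 API path should be checked against your Rancher version:

```shell
#!/bin/bash
# Placeholders - set these to match your environment
RANCHER_URL="https://rancher.example.com"  # DNS name of the Rancher server
API_KEY="accesskey:secretkey"              # environment API key created earlier
PROJECT_ID="1a5"                           # from the /env/xxxx portion of the environment URL

# Grab the internal IP so the host registers with a unique, meaningful name
HOST_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')

# Ask the Rancher API for an active registration URL for this environment
REG_URL=$(curl -s -u "$API_KEY" \
  "$RANCHER_URL/v1/projects/$PROJECT_ID/registrationtokens?state=active" |
  jq -r '.data[0].registrationUrl')

# Start the agent; match the agent tag to your Rancher server version
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_AGENT_IP="$HOST_IP" \
  rancher/agent:v1.0.2 "$REG_URL"
```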
The script is pretty straightforward. It gathers the internal IP address of the server being created so that it can be added to the Rancher environment with a unique name. Note that there are a number of variables that need to be set to reflect your environment: one for the DNS name of the Rancher server, one for the token generated in the step above, and one for the project ID, which can be found by navigating to the environment and looking at the /env/xxxx portion of the URL.
After we have all the needed information and have updated the script, we can curl the Rancher server with the registration token (this won’t work if you didn’t populate the API in the steps above or if your keys are invalid). Finally, we start a Docker container with the agent version set (check your Rancher server version and match it) along with the URL obtained from the curl command.
The final step is to get the script to run when the server is provisioned. There are many ways to do this, and this step will vary depending on a number of different factors, but in this post I am using cloud-init for CoreOS on AWS. Cloud-init is used to inject the script into the server and to create a systemd service that runs the script on first boot; the result of the script is used to run the Rancher agent, which allows the server to be picked up by the Rancher server and its environment.
Here is the logic to run the script when the server is booted.
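The cloud-config itself was not preserved here, so below is a sketch of the general shape for CoreOS. The file path, unit name and script contents are illustrative assumptions; the full cloud-init file the post links to has the real details:

```yaml
#cloud-config
# Path, unit name and script body below are placeholders
write_files:
  - path: /opt/bin/rancher-bootstrap.sh
    permissions: "0755"
    content: |
      #!/bin/bash
      # registration script from the previous section goes here

coreos:
  units:
    - name: rancher-agent.service
      command: start
      content: |
        [Unit]
        Description=Bootstrap this host into the Rancher environment
        After=docker.service
        Requires=docker.service

        [Service]
        Type=oneshot
        ExecStart=/opt/bin/rancher-bootstrap.sh
```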
The full version of the cloud-init file can be found here.
After you provision your server and give it a minute to warm up and run the script, check your Rancher environment to see if the server has popped up. If it hasn’t, the first place to look is the newly created server itself. Run docker logs -f rancher-agent to get information about what went wrong. Usually the problem is pretty obvious.
A brand new server looks something like this.
I typically use Terraform to provision these servers, but covering Terraform here feels a little out of scope. You can imagine some really interesting possibilities with auto scaling groups and load balancers that come and go as your environment changes, which is one of the beauties of disposable infrastructure as well as infrastructure as code.
If you are interested in seeing how this Rancher bootstrap process fits in with Terraform let me know and I’ll take a stab at writing up a little piece on how to get it working.