Limit Jenkins Multibranch Pipeline Builds

Update: Thanks to Torben Kerr for adding instructions for setting up these properties at the multibranch build level rather than for each Jenkinsfile.  You can check out his “fix” here.

As the Jenkins pipeline functionality continues to rapidly evolve, the project documentation (or lack thereof) has been a consistent pain point as a user.  Invariably, the documentation is either out of date or missing completely.  I expect the docs to improve as the project matures, but for now, the cake is a lie.  I ran into this roadblock recently while looking for a way to limit the number of concurrent builds that happen in Jenkins pipelines.  I hope this post will help others avoid the tedious search for this seemingly simple piece of functionality, as well as give some insight into strategies for finding undocumented features in Jenkins.

While this feature is fairly obvious for old-style Jenkins jobs (a simple check box in the job configuration), the same functionality for pipelines is seemingly nonexistent.  Through extensive Googling and Stack Overflowing, I discovered the feature was recently added to the Multibranch plugin.  Specifically, I found an issue in the (awful) issue tracker used by Jenkins, which in turn led me to some code in a semi-recent PR that basically allows concurrency to be turned on or off.  Of course, when I tried to use the code from the PR it didn’t work right away, so I had to go deeper.

Eventually, I stumbled across a Stack Overflow post that discusses how to use the properties functionality of pipelines.  Equipped with this new piece of information, I finally had enough substance to start playing around with the code.  To make the creation of pipelines easier, Jenkins also recently added a snippet generator, which allows users to build out sample snippets quickly.

To use the snippet generator, either drill into an existing pipeline-style job using a URL similar to the one below:

https://jenkins.example.com/job/<jobname>/pipeline-syntax/

Or create a new job, and click on the “Pipeline Syntax” link after it has been created to test out different snippets.

[Screenshot: the Pipeline Syntax link on a pipeline job]

Inside the snippet generator there are a number of “steps” to choose from.  Based on the information I had already gathered, I selected the properties step to create the basic skeleton of what I wanted, and was able to use the disableConcurrentBuilds() function I found earlier.  Below is a snippet of what the code in your Jenkinsfile might actually look like:

node {
    // This one-liner is what limits concurrent builds
    properties([disableConcurrentBuilds()])

    // Do stuff
    ...
}

Yep.  That’s it.  Just make sure to put the properties() function at the beginning of the node block, otherwise concurrency won’t be adjusted right away, which could lead to problems.  Another thing to note: since the code is just Groovy, the step to disable concurrency could easily be moved into a workflow library and applied at the beginning of all jobs if you wanted to limit concurrency for all pipeline builds, as sketched below.  Finally, the code disables concurrent builds on a per-branch basis.  Essentially, if you push many different branches it will still build all of them; it will just limit each branch to one build at a time, and will queue up jobs for any commits that get pushed after the initial job has been created.  I know that is a mouthful.  Let me know in the comments if this explanation needs any clarification.
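For example, the disableConcurrentBuilds() property could be wrapped in a small helper step in a shared workflow library.  Below is a minimal sketch; the limitBuilds name and the vars/ global variable layout are illustrations on my part, not something prescribed by the plugin.

// vars/limitBuilds.groovy in a shared workflow library (hypothetical helper name)
def call() {
    // Same one-liner as above, now reusable from any Jenkinsfile
    properties([disableConcurrentBuilds()])
}

A Jenkinsfile could then call limitBuilds() at the top of its node block instead of repeating the properties() line in every job.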

While I love open source software, sometimes projects move so fast that certain areas get neglected.  I am thankful for things like Github, because I was able to use it to piece together all the other information I found and come up with a solution.  But I would argue that good documentation not only saves folks like me the time and energy of the crazy searches, it also makes it much easier for new users to look at a project and understand what is going on.  I will be 100% honest and say that Jenkins pipelines are not for the faint of heart, and I’m sure there are many others who will agree with this sentiment.  I know it is easier said than done, but anything at this point would be an improvement in my opinion.


Hide file extensions in PowerShell tab completion

One thing I quickly discovered as I got acclimated to my new Windows machine is that, by default, the Windows Powershell CLI appends the executable file extension to tab-completed commands, which is not the case on Linux or OSX.  That got me wondering if it is possible to modify this default behavior and remove the extension.  I’m going to ruin the surprise and let everybody know that it is definitely possible to change this behavior, thanks to the flexibility of Powershell and friends.  Now that the surprise is ruined, read on to find out how the solution works.

To check which file types Windows considers to be executable you can type $Env:PathExt.

PS > $Env:PathExt
.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.PY;.PYW;.CPL

Similarly, you can type $Env:Path to get a list of places that Windows will look for files to execute by default.

PS > $Env:PATH
C:\Program Files\Docker\Docker\Resources\bin;C:\Python35\Scripts\;C:\Python35\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\NVIDIA Co
ram Files\nodejs\;C:\Program Files\Git\cmd;C:\Program Files (x86)\Skype\Phone\;C:\Users\jmreicha\AppData\Local\Microsoft\WindowsApps;C:\Users\jmreicha\AppData\Local\atom\bin;C:\Users\jmreicha\AppData\Roaming\npm

The problem, though, is that when you start typing the name of an executable on this path, say python, and tab complete it, Windows will automatically append the file extension to the executable.  Since I am more comfortable using a *nix style shell, having to deal with the file extensions is an annoyance.
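For example, with C:\Python35 on the path as shown above, typing the first few characters of python and hitting tab completes to the full file name, extension included.

PS > pyth<tab>
PS > python.exe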

Below I will show you a hack for hiding these extensions from your Powershell prompt.  It was actually much more work than I expected, but with some help from folks over at Stack Overflow, we can do it.  Basically, we need to override the default Powershell tab completion with our own logic, and then have that override get loaded into the Powershell prompt when it starts, via a custom Profile.ps1 file.

To get this working, the first step is to look at what the default tab completion does.

(Get-Command 'TabExpansion2').ScriptBlock

This will spit out the code that handles the tab completion behavior.  To get our custom behavior we need to override the original code with our own logic, which I have below (I wish I came up with this myself, but alas).  Note that this is not the full code, just the custom logic.  The full script is posted below.

$field = [System.Management.Automation.CompletionResult].GetField('completionText', 'Instance, NonPublic')
$source.CompletionMatches | % {
    If ($_.ResultType -eq 'Command' -and [io.file]::Exists($_.ToolTip)) {
        $field.SetValue($_, [io.path]::GetFileNameWithoutExtension($_.CompletionText))
    }
}
Return $source

The code looks a little bit intimidating, but it is basically just checking whether each completion result is a command whose tooltip points at a file that actually exists on disk (i.e. an executable on the path), and if so, stripping the extension from the completion text.

So to get this all working, we need to create a file with the logic, and have Powershell read it at load time.  Go ahead and paste the following code into a file like no_ext_tabs.ps1.  I place this in the Powershell path (~/Documents/WindowsPowerShell), but you can put it anywhere.

Function TabExpansion2 {
    [CmdletBinding(DefaultParameterSetName = 'ScriptInputSet')]
    Param(
        [Parameter(ParameterSetName = 'ScriptInputSet', Mandatory = $true, Position = 0)]
        [string] $inputScript,

        [Parameter(ParameterSetName = 'ScriptInputSet', Mandatory = $true, Position = 1)]
        [int] $cursorColumn,

        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 0)]
        [System.Management.Automation.Language.Ast] $ast,

        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 1)]
        [System.Management.Automation.Language.Token[]] $tokens,

        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 2)]
        [System.Management.Automation.Language.IScriptPosition] $positionOfCursor,

        [Parameter(ParameterSetName = 'ScriptInputSet', Position = 2)]
        [Parameter(ParameterSetName = 'AstInputSet', Position = 3)]
        [Hashtable] $options = $null
    )

    End
    {
        $source = $null
        if ($psCmdlet.ParameterSetName -eq 'ScriptInputSet')
        {
            $source = [System.Management.Automation.CommandCompletion]::CompleteInput(
                <#inputScript#>  $inputScript,
                <#cursorColumn#> $cursorColumn,
                <#options#>      $options)
        }
        else
        {
            $source = [System.Management.Automation.CommandCompletion]::CompleteInput(
                <#ast#>              $ast,
                <#tokens#>           $tokens,
                <#positionOfCursor#> $positionOfCursor,
                <#options#>          $options)
        }
        $field = [System.Management.Automation.CompletionResult].GetField('completionText', 'Instance, NonPublic')
        $source.CompletionMatches | % {
            If ($_.ResultType -eq 'Command' -and [io.file]::Exists($_.ToolTip)) {
                $field.SetValue($_, [io.path]::GetFileNameWithoutExtension($_.CompletionText))
            }
        }
        Return $source
    }    
}

To start using the tab completion override right away, just dot-source the file as below.

. .\no_ext_tabs.ps1

If you want the extensions to be hidden every time you start a new Powershell session, we just need to create a Powershell profile (there is more reading on creating Powershell profiles here if you’re interested) and have it load our script.  If you already have a custom profile you can skip this step.
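If you’re not sure where your profile lives, the built-in $profile variable holds the path, which by default is typically something like C:\Users\<user>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1.

PS > $profile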

New-Item -path $profile -type file -force

After you create the profile go ahead and edit it by adding the following configuration.

# Dot source no_ext_tabs to remove file extensions from executables in path
. C:\Users\jmreicha\Documents\WindowsPowerShell\no_ext_tabs.ps1

Close your shell and open it again and you should no longer see the file extensions.

There is one last little, unrelated tidbit that I discovered through this process but thought was pretty handy and worth sharing with other Powershell N00bs.

Powershell 3 and above provides some nice key bindings for jumping around the CLI, similar to a bash-based shell if you have a background using *nix systems.

[Screenshot: Powershell key shortcuts]

You can check the full list of these key bindings by typing ctrl+alt+shift+? in your Powershell prompt (thanks Keith Hill for this trick).
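If you have the PSReadline module loaded (the default in newer versions of Powershell), you can also dump a plain text list of its key handlers with the module’s own cmdlet.

Get-PSReadlineKeyHandler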


Backing up Jenkins configurations to S3

If you have worked with Jenkins for any extended length of time, you quickly realize that the server configurations can become complicated.  If the server ever breaks and you don’t have a good backup of all the configuration files, it can be extremely painful to recreate all of the jobs that you have configured.  And if you have started using the Jenkins workflow libraries, all of your custom scripts and code will disappear as well if you don’t back them up.

Luckily, backing up your Jenkins job configurations is a fairly simple and straightforward process.  Today I will cover one quick and dirty way to back up configs using a Jenkins job.

There are some AWS plugins that will back up your Jenkins configurations, but I found that it was just as easy to write a little bit of bash to do the backup, especially since I wanted to back up to S3, which none of the plugins I looked at handle.  In general, the plugins I looked at either felt a little too heavy for what I was trying to accomplish or didn’t offer the functionality I was looking for.

If you are still interested in using a plugin, there are a few backup plugins available on the Jenkins wiki to check out.  Keep reading if none of them look like a good fit.

The first step is to install the needed dependencies on your Jenkins server.  For the backup method I will be covering, the only tools that need to be installed are the aws cli, tar, and rsync.  Tar and rsync should already be installed; to get the aws cli, you can download and install it with pip on the Jenkins server that has the configurations you want to back up.

pip install awscli
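You can quickly verify the install by asking the cli for its version.

aws --version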

After the prerequisites have been installed, you will need to create your Jenkins job.  Click New Item -> Freestyle and input a name for the new job.

[Screenshot: naming the new Jenkins job]

Then you will need to configure the job.

The first step is figuring out how often you want to run this backup.  A simple strategy would be to back up once a day, which is illustrated below.

[Screenshot: the “Build periodically” schedule configuration]

Note that the ‘H’ above tells Jenkins to randomize when during the hour the job runs, so that if other jobs are configured the load gets spaced out.
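Since the screenshot isn’t reproduced here, a once-per-day schedule in the “Build periodically” field could look like the following (the choice of hour is up to you).

# Run once a day, at a randomized minute during the midnight hour
H 0 * * *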

The next step is to back up the Jenkins files.  The logic is all written in bash, so if you are familiar with shell scripting it should be easy to follow along.

# Delete all files in the workspace
rm -rf *

# Create a directory for the job definitions
mkdir -p $BUILD_ID/jobs

# Copy global configuration files into the workspace
cp $JENKINS_HOME/*.xml $BUILD_ID/

# Copy keys and secrets into the workspace
cp $JENKINS_HOME/identity.key.enc $BUILD_ID/
cp $JENKINS_HOME/secret.key $BUILD_ID/
cp $JENKINS_HOME/secret.key.not-so-secret $BUILD_ID/
cp -r $JENKINS_HOME/secrets $BUILD_ID/

# Copy user configuration files into the workspace
cp -r $JENKINS_HOME/users $BUILD_ID/

# Copy custom Pipeline workflow libraries
cp -r $JENKINS_HOME/workflow-libs $BUILD_ID

# Copy job definitions into the workspace
rsync -am --include='config.xml' --include='*/' --prune-empty-dirs --exclude='*' $JENKINS_HOME/jobs/ $BUILD_ID/jobs/

# Create an archive from all copied files (since the S3 plugin cannot copy folders recursively)
tar czf jenkins-configuration.tar.gz $BUILD_ID/

# Remove the directory so only the tar.gz gets copied to S3
rm -rf $BUILD_ID

Note that I am not backing up the job history because the history isn’t important for my uses.  If the history IS important, make sure to add a line to back up those locations, as in the sketch below.  Likewise, feel free to modify and/or update anything else in the script if it suits your needs any better.
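For example, build records live under each job’s builds/ directory by default, so a line like the following (mirroring the config.xml rsync above, and assuming the default $JENKINS_HOME/jobs/<job>/builds/ layout) would pull in the per-build build.xml records.

# Optionally copy build history records into the workspace
rsync -am --include='build.xml' --include='*/' --prune-empty-dirs --exclude='*' $JENKINS_HOME/jobs/ $BUILD_ID/jobs/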

The last step is to copy the backup to another location.  This is why we installed the aws cli earlier.  Here I am just uploading the tar file to an S3 bucket with versioning enabled (see below if you’re not familiar with bucket versioning).

export AWS_DEFAULT_REGION="xxx"
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"

# Upload archive to S3
echo "Uploading archive to S3"
aws s3 cp jenkins-configuration.tar.gz s3://<bucket>/jenkins-backup/

# Remove tar.gz after it gets uploaded to S3
rm -rf jenkins-configuration.tar.gz

Replace the AWS_DEFAULT_REGION with the region where the bucket lives (typically us-east-1), make sure to update AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to use an account with access to write to S3 (not covered here), and finally, replace <bucket> with the name of your own bucket.
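If versioning isn’t already enabled on the bucket, a one-off cli call like the following will turn it on (run it with credentials that are allowed to change bucket settings).

aws s3api put-bucket-versioning --bucket <bucket> --versioning-configuration Status=Enabled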

The backup process itself is usually pretty fast unless the Jenkins server has a massive number of jobs and configurations.  Once you have configured the job, run it once to test that it works.  If the job completes successfully, go check your S3 bucket and make sure the tar.gz file was uploaded.  If you are using versioning there should just be one file, and if you choose the “show versions” option you will see something similar to the following.

[Screenshot: versioned backup archives in the S3 bucket]

If everything went okay with your backup and upload to S3, you are done.  Common issues configuring this backup method are choosing the correct AWS bucket, region, and credentials.  Also, double check where all of your Jenkins configurations live in case they aren’t in a standard location.


Generate Certbot certificates with a container

This is a little bit of a follow-up to the original post about generating certs with the DNS challenge.  I decided to create a little container that can be used to generate a certificate, based on the newly renamed dehydrated script, with the extras to make DNS provisioning easy.

A few things have changed in the evolution of Let’s Encrypt and its tooling since the last post was written.  First, some of the tools have been renamed so I’ll just try to clear up some of the names if there is any confusion.  The official Let’s Encrypt client has been renamed to Certbot.  The shell script used to provision the certificates has been renamed as well.  What used to be called letsencrypt.sh has been renamed to dehydrated.

The Docker image can be found here.  The image is essentially the dehydrated script plus a few other dependencies to make the DNS challenge work: Ruby, a Ruby DNS hook script, and a few gems that the hook relies on.

The following is an example of how to run the script:

docker run -it --rm \
    -v $(pwd):/dehydrated \
    -e AWS_ACCESS_KEY_ID="XXX" \
    -e AWS_SECRET_ACCESS_KEY="XXX" \
    jmreicha/dehydrated-dns --cron --domain test.example.com --hook ./route53.rb --challenge dns-01

Just replace test.example.com with the desired domain.  Make sure that you have the DNS zone added to Route 53, and make sure the AWS credentials used have the appropriate permissions to read and write records on the zone.
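As a rough guide, an IAM policy along these lines should be enough for the hook to create and clean up the challenge records.  This is only a sketch using the standard Route 53 action names; ideally you would scope the Resource down to your specific hosted zone rather than using *.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange",
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": "*"
        }
    ]
}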

The command is essentially the same as in the original post, but it is a lot more convenient to run now, because you can specify where on your local system the generated certificates get dumped, and you can also easily specify or update the AWS credentials.

I’d like to quickly explain the decision to containerize this process.  Obviously dehydrated has been designed and written as a standalone tool, but generating certificates using the DNS challenge requires a few extra tidbits to be added.  Cooking all of the requirements into a container makes the setup portable, so it can be easily automated across different environments, and flexible, so it can be run in a variety of setups with different domain names and AWS credentials.  With the container approach, the certs could even be dropped out onto a Windows machine running Docker for Windows, for example.

tl;dr This setup may be overkill for some, but it has worked out well for my purposes.  Feel free to give it a try if you want to test out creating Certbot certs with the dehydrated tool in a container.


Running containers on Windows

There has been a lot of work lately to bring Docker containers to the Windows platform.  Docker has been working closely with Microsoft, and just announced the availability of Docker on Windows at the latest Ignite conference.  So, in this post we will go from 0 to your first Windows container.

This post covers some details about how to get up and running via the Docker app and also manually with some basic Powershell commands.  If you just want things to work as quickly as possible I would suggest the Docker app method, otherwise if you are interested in learning what is happening behind the scenes, you should try the Powershell method.

The prerequisites are basically the Windows 10 Anniversary update and its required components: the Docker app if you want to configure things through its GUI, or the Windows containers feature and Hyper-V if you want to configure your environment manually.

Configure via Docker app

This is by far the easier of the two methods.  A recent blog post has very good installation steps, which I will walk through here, adding a few pieces of info that helped me out during the installation and configuration process.

After you install the Win 10 Anniversary update, go grab the latest beta version of the Docker engine via the Docker for Windows project.  NOTE: THIS METHOD WILL NOT WORK IF YOU DON’T USE BETA 26 OR LATER.  To check your version, click the Docker tray icon, choose “About Docker”, and make sure it says -beta26 or higher.

[Screenshot: the About Docker version dialog]

After you go through the installation process, you should be able to run Docker containers.  You should also now have access to the other Docker tools, including docker-compose and docker-machine.  To test that things are working, run the following command.

docker run hello-world

If the run command worked you are most of the way there.  By default, the Docker engine will be configured to use the Linux based VM to drive its containers.  If you run “docker version” you can see that your Docker server (daemon) is using Linux.

docker version

In order to get things working via Windows, select the option “Switch to Windows containers” in the Docker tray icon.

[Screenshot: the “Switch to Windows containers” tray option]

Now run “docker version” again and check what Server architecture is being used.

docker version

As you can see, your system should now be configured to use Windows containers.  Now you can try pulling a Windows based container.

docker pull microsoft/nanoserver

If the pull worked, you are all set.  There’s a lot going on behind the scenes that the Docker app abstracts away, but if you want to try enabling Windows support yourself manually, see the instructions below.

Configure with Powershell

If you want to try out Windows-native containers without the latest Docker beta, check out this guide.  The basic steps are to:

  • Enable the Windows container feature
  • Enable the Hyper-V feature
  • Install Docker client and server

To enable the Windows containers feature from the CLI, run the following command from an elevated (admin) Powershell prompt.

Enable-WindowsOptionalFeature -Online -FeatureName containers -All

To enable the Hyper-V feature from the CLI, run the following command from the same elevated prompt.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

After you enable Hyper-V you will need to reboot your machine, which can also be done from the command line.

Restart-Computer -Force

After the reboot, you will need to either install the Docker engine manually or just use the Docker app.  Since I have already demonstrated the Docker app method above, here we will install the engine by hand.  It’s also worth mentioning that if you are using the Docker app, or have used it previously, the commands above have already been run for you, so the features should already be turned on, simplifying this process.

The following will download the engine.

Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile "$env:TEMP\docker-1.13.0-dev.zip" -UseBasicParsing

Expand the zip into the Program Files path.

Expand-Archive -Path "$env:TEMP\docker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles

Add the Docker engine to the path.

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

Set up Docker to be run as a service.

dockerd --register-service

Finally, start the service.

Start-Service Docker
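To confirm the engine actually came up, you can check on it with the standard service cmdlet (dockerd registers itself under the name “docker” by default).

Get-Service docker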

Then you can try pulling your docker image, as above.

docker pull microsoft/nanoserver
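If the pull succeeds, you can take the image for a quick spin.  Something like the following should drop you into a cmd shell inside a nanoserver container (running cmd here is just an illustrative choice).

docker run -it microsoft/nanoserver cmd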

There are some drawbacks to this method, especially in a dev environment.

The Powershell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly.  Obviously the install/config process could be scripted out, but that solution isn’t ideal for most users.  Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically.  The managed app also installs and manages versions of the other Docker productivity tools, like compose and machine, that make interacting with and managing containers a lot easier.

I can see the Powershell installation method being leveraged in a configuration management scenario, or where a specific version of Docker should be deployed on a server.  Servers typically don’t need the other tools, and they should be pinned at specific version numbers to avoid instability and conflicts with other programs.

While the Docker app is still in beta and its Windows container management component is still new, I would definitely recommend it as a solution.  I haven’t had any issues with it yet, outside of a few edge cases, and it just makes the Docker experience so much smoother, especially for devs and other folks who are new to Docker and don’t want to muck around in the system.

Check out the Docker for Windows forums if you run into any issues.
