Intro to Hyperterm

If you haven’t heard about it yet, Hyperterm (not to be confused with HyperTerminal) is a cool new project that brings JavaScript to the terminal.  Hyperterm allows a wide variety of customizations and extensions to be added to the terminal, yet doesn’t add extra bloat and keeps things fast.  For those who don’t know, Hyperterm is based on the Electron project, which leverages Node.js to build cross-platform desktop applications.

At its simplest, Hyperterm is a drop-in replacement for other terminal emulators, like iTerm2 or the default terminal app that comes packaged with most operating systems.  Since Hyperterm is built on top of Node (via Electron) it is cross platform by default, so it works on Mac and Linux today, with Windows support coming soon.  This is a win because you can port your configuration to different platforms without reconfiguring anything, and you can also store your configuration in source control so that if your machine ever dies or you get a new one, you have a nice place to pick things up again, which is pretty slick.
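One simple way to do that (assuming you already keep a dotfiles repo; the paths here are just placeholders) is to store the file in that repo and symlink it into place:

ln -s ~/dotfiles/hyperterm.js ~/.hyperterm.js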

If you know JavaScript, you can already start hacking on the look and feel of Hyperterm; the Chromium developer tools are built right in (Cmd+Option+I).


To get started, head over to the official Hyperterm website and download the latest release.

Once the download is done and you have gone through the installation process, just fire up Hyperterm and you are good to go.


The stock Hyperterm is definitely usable.  The real power, though, comes from the flexibility and design of the plugin system and configuration file, which make customization easy to get going with and really powerful.


Hyperterm uses its own configuration file to extend the basic functionality.  The docs are a great resource for learning more about customization and configuration.

The process of changing themes or adding additional functionality is pretty straightforward.  All of the plugins that Hyperterm uses are just npm modules, so they can be installed and managed via npm.  For example, to change the default theme or add a status line, open up your ~/.hyperterm.js file.

Look for the “plugins” section.

plugins: [],

Add the desired plugins.  For example, to enable the ‘hyperline’ status line plugin:

plugins: [
  'hyperline',
],

Then reload Hyperterm to pick up the new configuration by pressing Cmd+Shift+R or by clicking View -> Reload.  If you installed a new theme you should notice it right away, and a nice status line should show up at the bottom of the terminal thanks to the ‘hyperline’ package.  Practically no time was spent enabling the functionality, which is a big win in my opinion.

For more ideas, definitely go check out the awesome-hyperterm project.  This repo is a great place to find out more about Hyperterm and other cool related projects.  The official docs are also a great resource for getting started as well as finding some ideas.

Finally, you can also run,

npm search hyperterm

to get a full listing of npm projects with hyperterm in their name, for even more ideas.  Outside of the plugins, you can easily hack on the configuration file itself to test out how things work.  Again, the config is just JavaScript, so if you know JS it is easy to start modifying things.

Additionally, you can tweak the configuration by hand to customize things like font sizes, colors, the cursor, etc. without having to install or use any plugins.  The process is similar to installing plugins: just pop open the ~/.hyperterm.js file, make any adjustments, then reload the terminal and you should be good to go.
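As a rough sketch of what those hand tweaks look like (the values here are just examples; the full list of options is covered in the official docs), the config portion of ~/.hyperterm.js is plain JavaScript:

module.exports = {
  config: {
    // font settings
    fontSize: 14,
    fontFamily: 'Menlo, monospace',
    // cursor color
    cursorColor: '#F81CE5',
  },
  // plugins are listed separately, as shown earlier
  plugins: ['hyperline'],
};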


The Hyperterm project is still very new but it is already capable of being the default terminal.  As the project grows in popularity, there will be more and more options for customization and the terminal itself will continue to improve.  It is exciting to see something new in the terminal emulator space because there are so few options.  It will be cool to see what new developments are in the works for the project.

It is definitely hard to adjust to something new, but it is also good to get out of your comfort zone sometimes.  There are lots of things to poke around at and plugins to try out with Hyperterm.

I can’t remember the last time I had this much fun fiddling around with terminal settings.  So at the very least, even if you don’t switch full time to Hyperterm, give it a try and see if it is a good fit.


My take on the NoOps movement

I recently attended DevOps Days Portland, where Kelsey Hightower gave a nice keynote about NoOps.  I had heard the term NoOps in passing before the conference but never really thought much about it or its implications.  Kelsey’s talk got me thinking more and more about the idea and what it means to the DevOps world.

For those of you who aren’t familiar, NoOps is a newer tech buzzword that has emerged to describe the concept that an IT environment can become so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage software in-house.

Obviously the term NoOps has caused some friction between the development world and the operations/DevOps world because of its perceived meaning, along with a very controversial article entitled “I Don’t Want DevOps.  I Want NoOps.” that kicked the whole movement off and sparked the original debate back in 2011.  The main argument from people who work in operations is that there will always be servers running somewhere; as a developer you can’t just magically make servers go away, which I agree with 100%.  It is incredibly short-sighted to assume that any environment can work in a way where operations in some form need not exist.

Interestingly though, if you dig into the goals and underlying meaning of NoOps, they are actually fairly reasonable to me when boiled down.  Here are just a few of them, borrowed from the article and Kelsey’s talk:

  • Improve the process of deploying apps
  • Not just VM’s, release management as well
  • Developers don’t want to deal with operations
  • Developers don’t care about hardware

All of these goals seem reasonable to me as an operations person, especially not having to work with developers.  Therefore, when I look at NoOps I don’t necessarily take its actual underlying meaning to be working against operations and DevOps; I look at it as developers trying to find a better way to get their jobs done, however misguided the wording and mindset.  I also see NoOps, from an operations perspective, as a shift in mindset about how to accomplish goals and improve processes and pipelines, which is something very familiar to people who have worked in DevOps.

Because of this perspective, I see an evolution in the way that operations and DevOps work, one that takes the best ideas from NoOps and applies them in practical ways.  Ultimately, operations people want to be just as productive as developers, and NoOps seems like a good set of ideas for getting on the same page.

To incorporate ideas from NoOps as cloud and distributed technologies continue to advance, operations folks need to embrace programming and automation in areas that have traditionally been managed manually, in order to abstract away complicated infrastructure and make it easier for developers to accomplish their goals.  Examples include automatically provisioning networks and VLANs, or issuing and deploying certificates at the click of a button.  As more of the infrastructure gets abstracted away, it is important for operations to be able to automate these tasks.

If anything, I think NoOps makes sense as a concept for improving the lives of both developers and operations, which is one facet that DevOps aims to help solve.  So to me, the goals of NoOps are a good thing, even though there has been a lot of stigma around it.  Just to reiterate, I think it is absurd for anybody to say that operations jobs are going away anytime soon; the job and responsibilities are just evolving to fit the direction other areas of the business are moving.  If anything, the skills of managing cloud infrastructure, automation, and building robust systems will be in higher demand.

As an operations/DevOps person just remember to stay curious and always keep working on improving your skill set.


Setting up a Jenkins 2.0 pipeline


The new Jenkins pipeline integration in 2.0 is pretty awesome and is a great way to add more automation to Jenkins.  In this post we will set up Jenkins so that we can write our own custom libraries for Jenkins to use with the pipeline.

One huge benefit of the new pipeline integration into the core components of Jenkins is the idea of a Jenkinsfile, which enables Jenkins jobs to be executed automatically from the repo, similar to the way TravisCI works.  Simply place a Jenkinsfile in the root of the repo that you wish to automate, set up your webhook so that events in GitHub automatically trigger the Jenkins job, and let Jenkins take care of the build.

There are a few very good guides for getting this functionality set up.

Unfortunately, while these guides/docs are very informative and useful, they are somewhat confusing to new users and gloss over a few important steps and details that took me longer than they should have to figure out and understand.  Therefore the focus here will be on some of the details that the mentioned guides lack, especially setting up the built-in Jenkins Git repo for accessing and working with the custom pipeline libraries.


There are many great resources out there already for getting a Jenkins server up and running.  For the purposes of this post it will be assumed that you have already created and set up a Jenkins instance, using one of the other Digital Ocean tutorials.

The first step to getting the pipeline set up, again assuming you have already installed Jenkins 2.0 from one of the guides linked above, is to enable the built-in Jenkins Git repo for housing the custom pipeline libraries that we will write a little later on.  There is a section in the workflow plugin tutorial that explains it, but it is one of the confusing areas and is buried in the documentation, so it will be covered in more detail below.

In order to be able to clone, read, and write to the built-in Jenkins Git repo, you will need to add your SSH public key to the server.  To do this, the first step is to configure the server per the SSH plugin and set a port to connect to so that the repo can be cloned.  This can be found in Jenkins -> Configuration.  In this example I used port 2222.

ssh port configuration

As always, security should be a concern, so make sure you have authentication turned on.  In this example I am using GitHub authentication, but the process will be similar for other authentication methods.  Also make sure you follow best practices in locking down any external access by hardening SSH or using other ways of blocking access.

After the Git server has been configured, you will need to add the public key for your user in Jenkins.  This was another one of the confusing parts initially, since the section for adding your personal SSH public key was buried in docs that were easy to gloss over.  The place in Jenkins to add the key is your user configuration page, which is typically found at http://<jenkins_server>/user/<username>/configure


public ssh key

If you don’t know where your public SSH key is located, you can print it with the following command (this assumes a default RSA key; adjust the path if yours is different).

cat ~/.ssh/id_rsa.pub

Just copy that key and paste it into the SSH Public Keys section in the Jenkins user configuration page.
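If you don’t have an SSH key pair to copy yet, you can generate one first and add it the same way.  This is a generic example and the comment is just a placeholder:

ssh-keygen -t rsa -b 4096 -C "jenkins-access"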

Now that you have the built-in Jenkins Git repo configured, you can clone it via the instructions from the above tutorial (substituting your own username and server address).

git clone ssh://<jenkins_username>@<jenkins_server>:2222/workflowLibs.git

Notice that we are using port 2222.  This is the port we configured via the SSH plugin above.  The port number can be anything you like, just make sure to keep it consistent between the Jenkins configuration and the clone URL.

Working with the pipeline

With the pipeline library repo cloned, we can write a simple function and then add it back into Jenkins.  By default the repo is called workflowLibs.  The basic structure of the repo is to place your custom functions in the vars directory.  Within the vars directory you create a .groovy file for the function and a matching .txt file for any documentation of the command you want to add.  In our example let’s create a hello function.
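The resulting layout of the cloned repo ends up looking something like this (hello.groovy and hello.txt are the files created in the next step):

workflowLibs/
└── vars/
    ├── hello.groovy
    └── hello.txt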

Create a hello.groovy and a hello.txt file.  Inside the hello.groovy file, add something similar to the following.

echo 'Hello world!'

Go ahead and commit the hello.groovy file; don’t worry about the hello.txt file.

git add hello.groovy

git commit -m "Hello world example"

git push (You may need to set your upstream)

Obviously this function won’t do much.  Once you have pushed the new function, you should be able to view it in the dropdown of pipeline functions on the Jenkins build configuration screen.

NOTE:  There is a nice little script testing tool built in as part of the pipeline now, called the snippet generator.  If you want to test out small scripts on your Jenkins server before committing and pushing any local changes, you can try out your script first in Jenkins.  To do this, open up any of your Jenkins job configurations and then click the link that says “Pipeline syntax” toward the bottom of the page.

Pipeline configuration

Here’s what the snippet generator looks like.

snippet generator

From the snippet-generator you can build out quite a bit of the functionality that you might want to use as part of your pipeline.

Configuring the job

There are a few different options for setting up Jenkins pipelines.  The most common types are the Pipeline and the Multibranch Pipeline.  The Pipeline implements all of the scripting capabilities that have been covered so far, so that the power of the Groovy scripting language can be leveraged in the Jenkins job, as well as things like application life cycles (via stages), which makes CI/CD much easier.

The Multibranch Pipeline augments the functionality of the Pipeline by adding the ability to index branches from SCM, so that different branches can be easily built and tested.  The Multibranch Pipeline is especially useful for larger projects that have many developers and feature branches being worked on, because each one of the branches can be tracked separately and developers don’t need to worry as much about overlapping with others’ work.

Taking the pipeline functionality one step further, it is possible to create a Jenkinsfile with all of the needed pipeline code inside of a repo so that it can be built automatically.  The Jenkinsfile is basically a blueprint describing the how and what of a project’s build process, and it can leverage any custom Groovy functions that you have written that live on the Jenkins server.

Using the combination of a GitHub webhook and a Jenkinsfile in a Git repo, it is easy to automatically tell your Jenkins server to kick off a build every time a commit or PR happens in GitHub.

Let’s take a look at what an example Jenkinsfile might look like.

node {
    stage 'Checkout'
    // Checkout logic goes here

    stage 'Build'
    // Build logic goes here

    stage 'Test'
    // Test logic goes here

    stage 'Deploy'
    // Deploy logic goes here
}

This Jenkinsfile defines various “stages”, which will run through the set of steps defined in each stage every time a commit has been pushed or a PR has been opened for the project.  One workflow, as shown above, is to segment the job into checkout, build, test, and deploy stages.  Different projects might require different stages, so it is nice to have granular control over what the job is doing on a per-repo basis.
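To make that a little more concrete, here is a sketch of what a filled-in stage might look like.  The sh step is a standard pipeline step for running shell commands, but the make targets here are purely hypothetical placeholders for whatever build tooling your project actually uses.

node {
    stage 'Build'
    // shell out to the project's build tooling
    sh 'make build'

    stage 'Test'
    sh 'make test'
}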

Bonus: Github Webhooks

Configuring webhooks in GitHub is pretty easy as well.  Git is the de facto standard for storing source code these days, and there are a number of different Git management tools, so the steps should be very similar if you are using a tool other than GitHub.  A webhook in GitHub can be configured to trigger a Jenkins pipeline build when either a commit is pushed to a branch, like master, or a PR is created for a branch.  The advantages of using webhooks should be pretty apparent: builds kick off automatically and can be configured to send their results to various communication channels like email, Slack, or a number of other chat tools.  The webhooks are the last step in automating the new pipeline features.

If you haven’t already, you will need to enable the GitHub plugin in order to use the GitHub webhooks.  No extra configuration should be needed out of the box after installing the plugin.

To configure the webhook, first make sure there is a Jenkinsfile in the root directory of the project.  After the Jenkinsfile is in place you will need to set up the webhook.  Navigate to the settings for the project that you would like to create a webhook for and select ‘Settings’ -> ‘Webhooks & services’.  From there, there is a button to add a new webhook.

adding a webhook

Change the Payload URL to point at the Jenkins server, update the Content type to application/x-www-form-urlencoded, and leave the secret section blank.  All the other defaults should be fine.
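For the GitHub plugin, the payload URL typically points at the plugin’s webhook endpoint on the Jenkins server, which looks something like the following (substitute your own Jenkins host):

http://<jenkins_server>/github-webhook/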

After adding the webhook, create or update the associated job in Jenkins.  Make sure the new job is configured as either a pipeline or multibranch pipeline type.

pipeline job

In the job configuration point Jenkins at the GitHub URL of the project.

configure the job

Also make sure to set the build trigger to ‘Build when a change is pushed to GitHub’.

more configuration

You may need to configure credentials if you are using private GitHub repos.  This can be done in Jenkins by navigating to ‘Manage Jenkins’ -> ‘Credentials’ -> ‘Global’.  Then choose ‘Add Credentials’ and select the SSH key used in conjunction with GitHub.  After the credentials have been set up, there should be an option when configuring jobs to use the SSH key to authenticate with GitHub.


Writing the Jenkinsfiles and custom libraries can take a little bit of time to get the hang of initially, but they are very powerful.  If you already have experience writing Groovy, then writing these functions and files should be fairly straightforward.

The move towards pipelines brings a number of nice features.  First, you can keep track of your Jenkins job definition simply by adding a Jenkinsfile to a repo, so you get all of the benefits of history and version tracking and one central place to keep your build configurations.  And because Groovy is such a flexible language, pipelines give developers and engineers more options and creativity in terms of what their build jobs can do.

One gotcha of this process is that there isn’t a great workflow yet for working with the library functions, so there is a lot of trial and error involved in getting custom functionality working correctly.  One good way to debug is to set up a test job and watch for errors in the console output when you trigger a build for it.  With the combination of the snippet generator and the script tester, though, this process has become much easier.

Another thing that can be tricky is the Groovy sandbox.  It is mostly an annoyance and I would not suggest turning it off; just be aware that it exists and oftentimes needs to be worked around.

There are many more features and things that you can do with the Pipeline, so I encourage readers to go out and explore some of the possibilities; the docs linked above are a good place for exploring many of these features.  As the pipeline matures, more and more plugins are adding the ability to be configured via the pipeline workflow, so if something isn’t possible right now it probably will be very soon.

Happy Jenkins pipelining!


Lint your Dockerfiles with Hadolint

If you haven’t already gotten into the habit of linting your Dockerfiles, you should.  Code linting is a common practice in software development that helps find, identify, and eliminate issues and bugs before they ever have a chance to become a problem.

Taking the concept of linting and applying it to Dockerfiles gives you a quick and easy way to identify errors before you ever build any Docker images.  By running your Dockerfile through a linter first, you can ensure that there aren’t any structural problems with the logic and instructions specified in it.  Linting your files is fast and easy (as I will demonstrate), and getting in the habit of adding a linting step to your development workflow is often very useful: not only will the linter help identify hidden issues which you might not otherwise catch right away, it can potentially save hours of troubleshooting later on, so there is a pretty good effort-to-benefit ratio there.

There are several other Docker linting tools around.

But in my experience, these tools have either been overly complicated, don’t detect/catch as many errors, or in general just don’t seem to work as well or have as much polish as Hadolint.  I may just have a skewed perspective of these tools, but this was my experience when I tried them, so take my evaluation with a grain of salt.  Definitely let me know if you have experience with any of these tools; I may just need to revisit them to get a better perspective.

With that being said, Hadolint offers everything I need, is very easy to get started with, and makes linting your Dockerfiles trivially easy, which counts for the most points in my experience.  Another bonus of Hadolint is that the project is fairly active and the author is friendly, so if there are things you’d like to see added, it shouldn’t be too hard to get some movement on them.  You can check out the project on GitHub for details about how to install and run Hadolint.

Below, I will go over how to get set up and started as well as some basic usage.


If you use Mac OS X there is a brew formula for installing Hadolint.

brew update
brew install hadolint

If you are a Windows user, for now you will have to run Hadolint from within a Docker container.

docker run --rm -i lukasmartinelli/hadolint < Dockerfile

If you feel comfortable with the source code you can try building it locally.  I haven’t attempted that method, so I don’t have instructions here for how to do it.  Go check out the project if you are interested.


Hadolint helps you find syntax errors and other mistakes that you may not otherwise notice in your Dockerfiles.  It’s easy to get started.  To run Hadolint, run the following.

hadolint Dockerfile

If there are any issues, Hadolint will print out the rule number as well as a blurb describing what could potentially be wrong.
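For instance, consider a hypothetical Dockerfile like this one (the base image and the pip package are just placeholders):

FROM python:latest

RUN pip install requests

Running Hadolint against something like this would produce warnings similar to the output below.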

DL4000 Specify a maintainer of the Dockerfile
L1 DL3007 Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag.
L3 DL3013 Pin versions in pip. Instead of `pip install <package>` use `pip install <package>==<version>`

As with any linting tool, you will definitely get some false positives at some point, so just be aware of items that can potentially be ignored.  There is an issue open on GitHub right now to allow Hadolint to ignore certain rules, which will help eliminate some of the false positives.  For example, in the above snippet we don’t necessarily care about the maintainer missing, so it will be nice to be able to ignore that line.

Here is a complete reference for all of the linting rules.  The wiki gives examples of what is wrong and how to fix things, which is very helpful.  Additionally, the author is welcoming ideas for additional things to check, so if you have a good idea for a linting rule, open up an issue.

That’s it for now.  Good luck and happy Docker linting!


Useful Vim Plugins

This post is mostly a reference for folks who are interested in adding a little bit of extra polish and functionality to the stock version of Vim.  The plugin system in Vim is a little bit confusing at first but is really powerful once you get past the initial learning curve.  I know this topic has been covered a million times, but a centralized reference for how to set up each plugin is a little bit harder to find.

Below I have highlighted a sample list of my favorite Vim plugins.  I suggest that you try as many plugins as you can to figure out what suits your needs and workflow best.  The following plugins are the most useful to me, but I certainly don’t think they will be the best for everybody, so use this post as a reference for getting started with plugins and try some out to decide which ones are the best for your own environment.


Vundle

This is a package manager of sorts for Vim plugins.  Vundle allows you to download, install, search, and otherwise manage plugins for Vim in an easy and straightforward way.

To get started with Vundle, put the following configuration at THE VERY TOP of your vimrc.

set nocompatible              " be iMproved, required
filetype off                  " required
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#rc()
"" let Vundle manage Vundle
Bundle 'gmarik/Vundle.vim'

Then you need to clone the Vundle project into the path specified in the vimrc above.

git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim

Now you can install any other defined plugins from within Vim by running :BundleInstall.  This should trigger Vundle to start downloading/updating its list of plugins based on your vimrc.

To install additional plugins, update your vimrc with the plugins you want to install, similar to how Vundle installs itself as shown below.

"" Example plugin
Bundle 'flazz/vim-colorschemes'

Color Schemes

Customizing the look and feel of Vim is a very personal experience.  Luckily there are a lot of options to choose from.

The vim-colorschemes plugin allows you to pick from a huge list of custom color schemes that users have put together and published.  As illustrated above, you can simply add the repo to your vimrc to gain access to a large number of color options.  Then to pick one, just add the following to your vimrc (after the Bundle command).

colorscheme xoria256

Next time you open up Vim you should see color output for the scheme you like.


Syntastic

Syntastic is a fantastic syntax checking and linting tool, easily the best syntax checker I have found for Vim.  Syntastic offers support for tons of different languages and styles, and even offers support for third-party syntax checking plugins.

Here is how to install and configure Syntastic using Vundle.  The first step is to add Syntastic to your vimrc.

" Syntax highlighting
 Bundle 'scrooloose/syntastic'

There are a few basic settings that also need to get added to your vimrc to get Syntastic to work well.

" Syntastic statusline
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
" Syntastic settings
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_loc_list_height=5
let g:syntastic_check_on_wq = 0
" Better symbols
let g:syntastic_error_symbol = 'XX'
let g:syntastic_warning_symbol = '!!'

That’s pretty much it.  Having a syntax checker and automatic code linter has been a wonderful boon for productivity.  I have saved myself so much time chasing down syntax errors and other bad code.  I definitely recommend this tool.


YouCompleteMe

This plugin is an autocompletion tool that adds tab completion to Vim, giving it a really nice IDE feel.  I’ve only tested YCM out for a few weeks now, but I have to say it doesn’t seem to slow anything down very much at all, which is nice.  An added bonus to using YCM with Syntastic is that they work together, so if there are problems with the functions entered by YCM, Syntastic will pick them up.

Here are the installation instructions for Vundle.  The first thing you will need to do is add a Vundle reference to your vimrc.

"" Autocomplete
Bundle 'Valloric/YouCompleteMe'

Then, in Vim, run :BundleInstall – this will download the git repo for YouCompleteMe.  Once the repo is downloaded you will need a few other tools installed to get things working correctly.  Check the official documentation for installation instructions for your system.  On OS X you will need Python, CMake, MacVim and clang support.

xcode-select --install
brew install cmake

Then, to install YouCompleteMe.

cd ~/.vim/bundle/YouCompleteMe
git submodule update --init --recursive (not needed if you use Vundle)
./install.py --clang-completer


vim-better-whitespace

This plugin highlights pesky whitespace automatically.  It is really useful to just have on in the background to help you catch whitespace mistakes.  I know I make a lot of mistakes with stray whitespace, so having this is just really nice.

To install it.

"" Whitespace highlighting
Bundle 'ntpeters/vim-better-whitespace'

That’s it.  Vundle should handle the rest.

ctrlp / nerdtree

These tools are useful for file management and traversal.  These plugins become more powerful when you work with a lot of files and move around different directories a lot.  There is some debate about whether to use nerdtree over the built-in netrw.  Nonetheless, it is still worth checking out the different file browsers to see how they work.
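To try them out with Vundle, the entries look just like the other plugins above (the GitHub paths shown here were correct at the time of writing, so double check them before installing):

"" File navigation
Bundle 'ctrlpvim/ctrlp.vim'
Bundle 'scrooloose/nerdtree'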

Check out Vim Unite for a sort of hybrid file manager with fuzzy finding like ctrlp plus additional functionality, like the ability to grep files from within Vim using a mapped key.

Bonus – Shellcheck

This is a shell and Bash linting tool that integrates with Vim and is great.  Bash is notoriously difficult to read and debug, and the shellcheck tool helps out with that a lot.

Install shellcheck on your system and Syntastic will automatically pick up the installation and do its linting whenever you save a file.  I have been writing a lot of Bash lately and the shellcheck tool has been a godsend for catching mistakes, and it is especially useful in Vim since it runs all the time.
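For example, on OS X you can install it with Homebrew (most other platforms have a package available as well).

brew install shellcheck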

By combining the powers of a good syntax checker and a solid understanding of Bash, you should be able to be that much more productive once you get used to having a built-in syntax and style checker for your scripts.
