Quickly get Node.js up and running on Windows

Installing software on Windows in an automatable, repeatable, and easy way has always been painful in the past.  Luckily, in recent years there have been some really nice additions to Windows and its ecosystem that have improved the process significantly.  The main tools that ease this process are PowerShell and Chocolatey, and together they have significantly improved the developer and administrative experience on Windows.

In the past, in order to install something like a programming language and its environment, you would have to download the zip or tar file, extract it, put it in the correct place, and set up environment variables and system paths by hand.  Things would also break pretty easily, and the whole process was just painful to work with.

Hopefully you are already familiar with PowerShell, because I won’t be covering it much in this post; if you have any recent version of Windows, you should already have it.  I describe Chocolatey a little bit below and why it is useful, but the Chocolatey website does a much better job of explaining its benefits, how it is used, and why package managers are a good thing.

Update Windows execution policy

This process is pretty straightforward.  Make sure you open up a PowerShell prompt with admin privileges, otherwise you will run into problems.  The first step is to change the default system execution policy (if you haven’t already).  On a fresh install of Windows, you will need to loosen up the security in order to install Chocolatey, which will be used to install and manage Node.js.  Luckily, there are just a few PowerShell commands that need to run.  To check the status of the execution policy, run the following.

Get-ExecutionPolicy
Restricted

This should tell you what your execution policy is currently set to.  To loosen the policy for Choco, run the following command.

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Follow the prompt and choose [Y] to update the policy.  Now, if you run Get-ExecutionPolicy you should see RemoteSigned.

Get-ExecutionPolicy
RemoteSigned

If you don’t have your execution policy opened up to at least RemoteSigned, you will have trouble installing things from the internet, including Chocolatey.  You can find more information about execution policies here if you don’t trust me or just want a better idea of how they work.
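
If you want to see how the policy is applied across the different scopes, PowerShell can list them all.  The output below is just an example of what a fresh system might show after the change.

Get-ExecutionPolicy -List

        Scope ExecutionPolicy
        ----- ---------------
MachinePolicy       Undefined
   UserPolicy       Undefined
      Process       Undefined
  CurrentUser       Undefined
 LocalMachine    RemoteSigned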

Install Chocolatey

If you aren’t familiar, Chocolatey is a package manager for Windows, similar to apt-get on Debian-based Linux systems or yum on Red Hat-based systems.  It allows users to quickly and easily install and manage software packages on Windows platforms through PowerShell.

The command to install Chocolatey is listed below.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

This command will take care of pretty much all of the setup, so just watch it do its thing.  Again, make sure you are inside an elevated admin shell, otherwise you will likely have problems with the installation.
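
Before moving on, it is worth sanity checking the install from a fresh shell.  The version number below is just an example and will vary.

choco --version
0.10.3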

Install Node.js

The last step (finally) is to install Node.js.  Luckily this is the easiest part.  Just run the following command.

choco install nodejs.install

Choose [Y] to accept that you want to run the install script and let it run.  There should be some colored output, and when it is done Node should be installed on your system.  You will need to close and re-open your PowerShell prompt to get the Node binaries picked up on your PATH, or run RefreshEnv to reload the path in the current session.  If you are in an admin shell, I would recommend dropping out of it by simply closing the current session and opening up a new, non-privileged session.

[screenshot: install node]

Once you have a fresh shell you can test that Node installed properly.

node -v
v6.6.0
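
The nodejs.install package also ships the bundled npm, so you can verify that at the same time (again, your version number may differ).

npm -v
3.10.3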

Now you are ready to go.  It only took a few minutes with the Choco package manager.  If you are new to Node in general and are looking for a good resource, the learnyounode project on GitHub is pretty decent.

Let me know if you have any caveats to add to this method.  It is the easiest and fastest way I have found to install Node, as well as other pieces of software, on Windows without any hassle.


Intro to Hyperterm

If you haven’t heard about it yet, Hyperterm (not to be confused with HyperTerminal) is a cool new project that brings JavaScript to the terminal.  Basically, Hyperterm allows a wide variety of customizations and extensions to be added to the terminal, yet doesn’t add extra bloat and keeps things fast.  For those who don’t know, Hyperterm is based on the Electron project, which leverages Node.js to build cross-platform desktop applications.

At its simplest, Hyperterm is a drop-in replacement for other terminal emulators, like iTerm2 or the default terminal app that comes packaged with most OSes.  Since Hyperterm is built on top of Node (via Electron), it is cross-platform by default, so it works on Mac and Linux, with Windows support coming soon.  This is an obvious win because you can port your configuration to different platforms without reconfiguring anything, and you can store your configuration in source control, so if your machine ever dies or you get a new one, you have a nice place to pick things up again, which is pretty slick.

If you know JavaScript, you can already start hacking on the look and feel of Hyperterm; the Chromium developer tools are literally built into it (Cmd+Option+I).

Installation

To get started, head over to the official Hyperterm website and download the latest release.

Once you have gone through the installation process, just fire up Hyperterm and you are good to go.

[screenshot: hyperterm]

The stock Hyperterm is definitely usable.  The real power, though, comes from the flexibility and design of the plugin system and configuration file, which make customization both easy to get going with and really powerful.

Configuration

Hyperterm uses its own configuration file to extend the basic functionality.  The docs are a great resource for learning more about customization and configuration.

The process of changing themes or adding additional functionality is pretty straightforward.  All the plugins that Hyperterm uses are just npm modules, so they can be installed and managed via npm.  For example, to change the default theme, you would open up your ~/.hyperterm.js file.

Look for the “plugins” section.

plugins: [],

Add the desired plugin.

plugins: [
    'hyperterm-atom-dark',
    'hyperline'
],

Then reload Hyperterm to pick up the new configuration by pressing Cmd+Shift+R or by clicking View -> Reload.  You should notice the new theme right away, and a nice status line should show up at the bottom of the terminal thanks to the hyperline package.  There was practically no time spent enabling the functionality, which is a big win in my opinion.

For more ideas, definitely go check out the awesome-hyperterm project.  This repo is a great place to find out more about Hyperterm and other cool related projects.  The official docs are also a great resource, both for getting started and for finding ideas.

Finally, you can also run,

npm search hyperterm

to get a full listing of npm projects with hyperterm in their name, for even more ideas.  Outside of the plugins, you can easily hack on the configuration file itself to test out how things work.  Again, the config is just JavaScript, so if you know JS it is easy to get started modifying things.

Additionally, you can tweak the configuration by hand to customize things like font sizes, colors, the cursor, etc. without having to install or use any plugins.  The process is similar to installing plugins: just pop open the ~/.hyperterm.js file, make any adjustments, then reload the terminal and you should be good to go.
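
As a quick sketch of what that looks like, here is a minimal ~/.hyperterm.js with a few appearance tweaks sitting alongside the plugin list (the specific values are just examples, not recommendations):

module.exports = {
  config: {
    // basic appearance tweaks, no plugins required
    fontSize: 14,
    fontFamily: 'Menlo, monospace',
    cursorColor: '#F81CE5'
  },
  // plugins are just npm module names
  plugins: [
    'hyperterm-atom-dark',
    'hyperline'
  ]
};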

Conclusion

The Hyperterm project is still very new, but it is already capable of serving as a default terminal.  As the project grows in popularity, there will be more and more options for customization, and the terminal itself will continue to improve.  It is exciting to see something new in the terminal emulator space, because there are so few options.  It will be cool to see what new developments are in the works for the project.

It is definitely hard to adjust to something new, but it is also good to get out of your comfort zone sometimes.  There are lots of things to poke around at and plugins to try out with Hyperterm.

I can’t remember the last time I had this much fun fiddling around with terminal settings.  So even if you don’t switch to Hyperterm full time, at the very least give it a try and see if it is a good fit.


Generate a Let’s Encrypt certificate using DNS challenge

UPDATE:  The letsencrypt.sh script has been renamed to dehydrated.  Make sure you are using the updated dehydrated script if you are following this guide.

The Let’s Encrypt project recently unveiled support for the DNS-01 challenge type for issuing certificates; this PR on GitHub added the challenge on the server side of things, though the official client does not support it (yet).  This is great news for those looking for more flexibility and additional options when creating and managing LE certificates.  For example, if you are not running a web server and rely strictly on having a DNS record to verify domain ownership, this new DNS challenge option is a great alternative.

I am still learning the ins and outs of LE, but so far it has been an overwhelmingly positive experience.  I feel like tools like LE will be the future of SSL certificate creation and management, especially as the ecosystem evolves further in the direction of automation and various industries continue to push for higher levels of security.

One of the big issues with implementing DNS support in an LE client, as things currently stand, is the large range of public DNS providers with no standardized API.  That lack of standardization makes it very difficult to automate the process of issuing and renewing certificates with LE.  The letsencrypt.sh project mentioned below is nice because it has implemented support for a few of the common DNS providers (AWS, CloudFlare, etc.) as hooks, which allow the letsencrypt.sh client to connect to their APIs and create the necessary DNS records.  Additionally, if support for a DNS provider doesn’t exist, it is easy to add by creating your own custom hooks.

letsencrypt.sh is a nice choice because it is flexible and just works.  It is essentially an implementation of the LE client written in bash, and it is well documented, easy to download, and straightforward to use.  To use the DNS feature you will need a hook, which is responsible for placing the correct challenge in your DNS record.
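
For reference, a hook is just an executable script that letsencrypt.sh invokes with a stage name and a few arguments.  A bare-bones custom hook might be sketched like this, where the my_dns_api_* functions are placeholders for whatever your DNS provider’s API requires:

#!/usr/bin/env bash
set -e

# letsencrypt.sh calls the hook as: <hook> <stage> [args...]
STAGE="$1"; shift

case "$STAGE" in
  deploy_challenge)
    # args: domain, token filename, token value
    DOMAIN="$1"; TOKEN_VALUE="$3"
    # publish the challenge as a TXT record at _acme-challenge.<domain>
    my_dns_api_create_txt "_acme-challenge.${DOMAIN}" "${TOKEN_VALUE}"  # placeholder
    ;;
  clean_challenge)
    DOMAIN="$1"; TOKEN_VALUE="$3"
    # remove the TXT record once validation is done
    my_dns_api_delete_txt "_acme-challenge.${DOMAIN}" "${TOKEN_VALUE}"  # placeholder
    ;;
  deploy_cert)
    # optionally copy certs into place and reload services here
    ;;
esac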

Here is an example hook for connecting to AWS Route53 to issue certificates for subdomains.  You can grab it and make it executable with the following commands.

curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb

You also need to make sure you have the Ruby dependencies installed on your system for the script to work.  The process of installing gems is pretty simple, but there was an issue with the version of awesome_print at the time I wrote this, so I had to install a specific version to get things working.  Otherwise, the installation of the other gems was straightforward.  NOTE: These gems are specific to the route53.rb script.  If you use another hook that doesn’t require these dependencies, you can skip the gem installations.

sudo gem install awesome_print -v 1.6.0
sudo gem install aws-sdk
sudo gem install pry
sudo gem install domainatrix

After you install the dependencies, you can run the letsencrypt.sh script.

./letsencrypt.sh

The following command takes a few different options: it specifies the domain directly on the command line (rather than referencing a domains.txt file), the custom hook that we downloaded, and the type of challenge to use, which is dns-01.

./letsencrypt.sh --cron --domain test.example.com --hook ./route53.rb --challenge dns-01

Make sure you have your AWS credentials configured, otherwise the certificate creation will fail.
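
The route53.rb hook uses the aws-sdk gem, which follows the standard AWS credential lookup, so one quick way to provide credentials (a sketch with placeholder values) is to export them in your environment before running the script:

export AWS_ACCESS_KEY_ID=AKIAEXAMPLE          # placeholder
export AWS_SECRET_ACCESS_KEY=secretexample    # placeholder
export AWS_REGION=us-east-1                   # adjust to your region

Here’s what the output of a successful certificate creation might look like.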

#
# !! WARNING !! No main config file found, using default config!
#
Processing test.example.com
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: deploy_challenge
Challenge: yabPBE9YvPXGFjslRtqXh-qK27QlWQgFlTusqcDzUMQ
{
 :change_info => {
 :id => "/change/C3K8MHKLB6IRKZ",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:54:50 UTC
 }
}
--------------------<
 + Responding to challenge for test.example.com...
-------------------->
 Domain: test.example.com
 Root: example.com
 Stage: clean_challenge
{
 :change_info => {
 :id => "/change/CE90OICFSN00C",
 :status => "PENDING",
 :submitted_at => 2016-08-08 17:55:15 UTC
 }
}
--------------------<
 + Challenge is valid!
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
-------------------->
 Domain: test.example.com
 Root: test.example.com
 Stage: deploy_cert
 Certs: /Users/jmreicha/test/letsencrypt.sh/certs/test.example.com/cert.pem
--------------------<
 + Done!

The entire process of creation and verification should take less than a minute, and when it’s done it will drop out a certificate for you.

Here is a dump of the commands used to get from zero to issuing a certificate with the dns-01 challenge, assuming you already have AWS set up and configured.

git clone https://github.com/lukas2511/letsencrypt.sh.git
cd letsencrypt.sh
curl -o route53.rb https://gist.githubusercontent.com/tache/3b6760784c098c9139c6/raw/33fe6e0791a7d40ce7cdf14019b7d31801d4ab05/hook.rb
chmod +x route53.rb
sudo gem install aws-sdk pry domainatrix awesome_print:1.6.0
./letsencrypt.sh --cron --domain yourdomain.example.com --hook ./route53.rb --challenge dns-01

Conclusion

There are other LE clients out there that are working on implementing DNS support, including LEGO and the official Let’s Encrypt client (now called certbot), with more clients adding this functionality all the time.  I like the letsencrypt.sh script because it is simple, easy to use, and just works out of the box, with only a few tweaks needed.

As mentioned above, I have a feeling that automated certificates are the future, as automation becomes increasingly common for these traditionally manual administration tasks.  Knowing how to issue certificates automatically, and how to use the tooling that creates them, is a great skill for any DevOps or operations person to have moving forward.


My take on the NoOps movement

I recently attended DevOps Days Portland, where Kelsey Hightower gave a nice keynote about NoOps.  I had heard the term NoOps in passing before the conference but never really thought much about it or its implications.  Kelsey’s talk got me thinking more and more about the idea and what it means for the DevOps world.

For those of you who aren’t familiar, NoOps is a newer tech buzzword that has emerged to describe the concept that an IT environment can become so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage software in-house.

Obviously the term NoOps has caused some friction between the development world and the operations/DevOps world because of its perceived meaning, along with a very controversial article entitled “I Don’t Want DevOps.  I Want NoOps.” that kicked the whole movement off and sparked the original debate back in 2011.  The main argument from people who work in operations is that there will always be servers running somewhere; as a developer, you can’t just magically make servers go away, which I agree with 100%.  It is incredibly shortsighted to assume that any environment can work in a way where operations, in some form, need not exist.

Interestingly though, if you dig into the goals and underlying meaning of NoOps, they are actually fairly reasonable to me when boiled down.  Here are just a few of them, borrowed from the article and Kelsey’s talk:

  • Improve the process of deploying apps
  • Not just VM’s, release management as well
  • Developers don’t want to deal with operations
  • Developers don’t care about hardware

All of these goals seem reasonable to me as an operations person, especially not having to work with developers.  Therefore, when I look at NoOps, I don’t take its actual underlying meaning to be working against operations and DevOps; I look at it as developers trying to find a better way to get their jobs done, however misguided their wording and mindset.  From an operations perspective, I also see NoOps as a shift in the mindset of how to accomplish goals and improve processes and pipelines, which is something very familiar to people who have worked in DevOps.

Because of this perspective, I see an evolution in the way operations and DevOps work that takes the best ideas from NoOps and applies them in practical ways.  Ultimately, operations people want to be just as productive as developers, and NoOps seems like a good set of ideas for getting on the same page.

To incorporate ideas from NoOps as cloud and distributed technologies continue to advance, operations folks need to embrace programming and automation in areas that have traditionally been managed by hand as part of the day-to-day, in order to abstract away complicated infrastructure and make it easier for developers to accomplish their goals.  Examples include automatically provisioning networks and VLANs, or issuing and deploying certificates at the click of a button.  As more of the infrastructure gets abstracted away, it is important for operations to be able to automate these tasks.

If anything, I think NoOps makes sense as a concept for improving the lives of both developers and operations, which is one facet that DevOps aims to address.  So to me, the goals of NoOps are a good thing, even though there has been a lot of stigma around it.  Just to reiterate, I think it is absurd for anybody to say that operations jobs will be going away anytime soon; the job and responsibilities are just evolving to fit the direction the rest of the business is moving.  If anything, the skills of managing cloud infrastructure, automating, and building robust systems will be in higher demand.

As an operations/DevOps person just remember to stay curious and always keep working on improving your skill set.


Easy Prometheus Monitoring in Rancher

Docker monitoring, and container monitoring in general, is an area that has historically been difficult.  There has been a lot of movement and progress in the last year or so to beef up container monitoring tools, but in my experience the tools have either been expensive or difficult to configure and complicated to use.  The combination of Rancher and Prometheus has finally given me hope.  Now it is easy to set up and configure a distributed monitoring solution without paying a high price.

Prometheus has recently added support for Rancher via the Rancher exporter, which is great news.  This is by far the easiest method I have discovered thus far for experimenting with Prometheus.

For those who don’t know much about Prometheus, it is an up-and-coming monitoring project created by engineers at SoundCloud and hosted on GitHub, with a particular focus on container and Docker monitoring.  Prometheus uses a polling-based model for “scraping” metrics out of predefined endpoints.  The Prometheus Rancher exporter enables Prometheus to scrape Rancher server specific metrics, which are very useful to have.  One other point worth mentioning is that Prometheus has a very nice, flexible design built upon different client libraries, in a similar way to Graphite, so adding support and instrumenting code for different platforms is easy.  Check out the list of exporters in the Prometheus docs for ideas on how to get started exporting metrics.
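
To give a flavor of the polling model, a Prometheus scrape configuration is just a list of endpoints to pull from.  The snippet below is a generic sketch, not the exact config the Rancher catalog template ships with, and the exporter address is a placeholder:

scrape_configs:
  - job_name: 'rancher'
    scrape_interval: 15s
    static_configs:
      # address of the prometheus-rancher-exporter container (placeholder)
      - targets: ['rancher-exporter:9010']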

This post won’t cover setting up the Rancher server or environment, since that is well documented in other places.  I won’t touch on alerting here either, because I honestly haven’t had much time to dig into it yet.  With that said, the first step I will focus on is getting Prometheus set up and running.  Luckily, it is extremely easy to accomplish this using the Rancher catalog and the Prometheus template.

[screenshot: prometheus stack]

Once Prometheus has been bootstrapped and everything is up, test it out by navigating to the Grafana home dashboard created by the bootstrap process.  Since this is a simple demo, my dashboard is located at the IP of the server on port 3000, which is the only port that should need to be publicly exposed if you are interested in sharing the Grafana dashboard.

The default Grafana credentials for this catalog template are admin/admin for the username and password, which is noted in the catalog notes found here.  The Prometheus tools ship with some nice preconfigured dashboards, so after you have things set up, it is definitely worth checking out some of them.

[screenshot: grafana dashboard]

If you look around the dashboards you will probably notice that metrics for the Rancher server aren’t available by default.  To enable these metrics we need to configure Prometheus to connect to the Rancher API, as noted in the Rancher monitoring guide.

Navigate to http://<SERVER_IP>:8080/v1/settings/graphite.host on your Rancher server, click edit in the top right, and then update the value there to point to the server address where InfluxDB was deployed.
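
If you prefer the command line, you can read the same setting back through the API to confirm the change; a quick sketch, assuming the API is reachable from your machine:

curl -s http://<SERVER_IP>:8080/v1/settings/graphite.host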

[screenshot: influxdb host]

After this setting has been configured, restart the Rancher server container, wait a few minutes and then check Grafana.

[screenshot: rancher server metrics]

As you can see, metrics are now flowing into the dashboard.

Now that we have the basics configured, we can drill down into individual containers to get a more granular view of what is happening in the environment.  This type of granularity is great because it gives a very detailed view of exactly what is going on inside the environment and gives us an easy way to share visuals with other team members.  Prometheus offers a web interface for interacting with the query language and visualizing results, which is useful for figuring out what kinds of things to visualize in Grafana.

Navigate to the server that the Prometheus server container is deployed to on port 9090.  You should see a screen similar to the following.

[screenshot: promdash]

There is documentation about how to get started with this tool, so I recommend taking a look and playing around with it yourself.  Once you find some useful metrics visualized in the graph view, grab the query used to generate the graph and add a new dashboard to Grafana.
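
For example, assuming the cAdvisor-style container metrics this stack collects, a query like the one below graphs per-container CPU usage over the last five minutes; once it looks right in the graph view, the same expression can be pasted into a new Grafana panel (exact metric names may vary with exporter versions):

rate(container_cpu_usage_seconds_total[5m])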

Prometheus offers a lot of power and flexibility and is a great tool for monitoring.  I am still very new to Prometheus, but so far it looks very promising, and I have to say I’m really impressed with the amount of polish and detail I was able to get in just an afternoon of experimenting.  I will be updating this post as I get more exposure to Prometheus and get more metrics and monitoring set up, so stay tuned.
