awesome-rancher

Awesome lists are a newish phenomenon for organizing links and information that has been gaining popularity on Github.  You can read more about awesome lists here, but essentially they are curated lists of helpful links and other resources, put together by members of their communities.

There are lots of awesome lists on Github these days, including lists for non-tech categories like recipes and video games.  Oddly enough though, through my own searching I quickly discovered that there was no awesome list for Rancher.

So I decided to give a little bit back to the Rancher community in my own way.  If you aren’t familiar with Rancher, it is essentially a platform and orchestration engine for container based workloads.  Of all the container management and orchestration platforms that have grown out of the container popularity explosion, Rancher has definitely been the most pleasant and easiest to use.  Also, if you are new to Rancher, please go check out the list on Github for more information; that is what it is there for.

Check out the awesome-rancher project here.

The goal for the project is to make awesome-rancher the go-to resource for finding useful Rancher information.  I want awesome-rancher to be as easy to use and as complete as possible, so that new users have a good jumping-off point when they decide to dive into Rancher.

If you are a Rancher community member, or even if you just notice a key resource missing from the list, please let me know with a comment here or on twitter/facebook, or just open a PR on the project itself (which is probably the easiest).


Configure a Rancher HAProxy health check

If you are familiar with ELB and ALB, you will know that there are slight idiosyncrasies between the two.  For example, ELB allows you to health check a back end server by TCP port, essentially letting you check whether a back end comes up and is listening on a specified port.  ALB is slightly different in its method for health checking: it uses HTTP checks (layer 7) to ensure back end instances are up and listening.

This becomes a problem in Rancher when you have multiple stacks in a single environment that are fronted by the Rancher HAProxy load balancer.  By default, the HAProxy config does not have a health check endpoint configured, so ALB has no way to tell whether the back end server is actually up and listening for requests.

A colleague and I recently discovered a neat trick for solving this problem if you are fronting your environment with an ALB.  The solution to this conundrum is to add a little bit of custom configuration to the Rancher HAProxy config.

In Rancher, you can modify the live settings without downtime.  Click on the load balancer that sits behind the ALB and navigate to the Custom haproxy.cfg tab.

haproxy config

Modify the HAProxy config by adding the following:

# Used to report haproxy's status
defaults
    mode http
    monitor-uri /_ping

Click the “Edit” button to apply these changes and you should be all set.
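Before touching the ALB, you can sanity check the new endpoint directly, since HAProxy answers requests to the monitor URI itself with a 200.  The hostname below is a placeholder for your Rancher load balancer:

# Expect an HTTP 200 response from HAProxy's monitor endpoint
curl -i http://<rancher-lb-host>/_ping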

Next, find the health check configuration for the associated ALB in the AWS console and add a check for the /_ping path on port 80 (or whichever port you are exposing/plan to listen on).  It should look similar to the following example.

Health checks

Below is an example that maps a DNS name to an internal Nginx container that is listening for requests on port 80.

HAProxy configuration
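The screenshot shows the mapping in the Rancher UI; expressed as raw HAProxy configuration, it would look roughly like the sketch below (the DNS name and service address are placeholders, not values from the original image):

frontend http-in
    bind *:80
    # Send requests for this DNS name to the nginx backend
    acl is_nginx hdr(host) -i app.example.com
    use_backend nginx-backend if is_nginx

backend nginx-backend
    # Rancher's internal DNS name for the service, listening on port 80
    server nginx1 nginx.mystack.rancher.internal:80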

The check in ALB ensures that the HAProxy load balancer in Rancher is up and running before allowing traffic to be routed to it.  You can verify that your Rancher load balancer is working if the instances behind your ALB start showing a status of healthy in the AWS console.

NOTE: If you don’t initially have any apps behind the Rancher load balancer (or none listening on the port specified in the health check), the AWS instances behind the ALB will remain unhealthy until you add configuration in Rancher for the stacks to be exposed, as pictured above.

After setting up HAProxy, publicly accessible services in private Rancher environments can easily be managed by updating the HAProxy config.  Just add a DNS name and a service to link to, and HAProxy figures out how and where to route the requests.  Mapping other services that aren’t listening on port 80 is very similar: use the above as a guideline and simply update the target port to whichever port the app is listening on internally.


Powershell for Linux!

Microsoft has been making a lot of inroads in the Open Source and Linux communities lately.  Linux and Unix purists have undoubtedly been skeptical of this recent shift; Canonical, for example, has caught flak for partnering with Microsoft.  But the times are changing, so instead of resenting this progress, I chose to embrace it.  I’ll even admit that I actually like many of the Open Source contributions Microsoft has been making, including a flourishing Github account as well as an increasingly rich, cross platform set of tools that includes Visual Studio Code, Ubuntu/Bash for Windows, .NET Core, and many others.

If you want to take the latest and greatest Powershell v6 for a spin on a Linux system, I recommend using a Docker container if available; otherwise, just spin up an Ubuntu (14.04+) VM and you should be ready.  I do not recommend using Powershell for any type of workload outside of experimentation, as it is still in alpha for Linux.  The beta v6 release (with Linux support) is around the corner, but there is still a lot of ground to cover to get there.  Since Powershell is Open Source, you can follow the progress on Github!

If you use the Docker method, just pull and run the container:

docker run -it --rm ubuntu:16.04 bash

Then add the Microsoft Ubuntu repo:

# apt-transport-https is needed for connecting to the MS repo
apt-get update && apt-get install -y curl apt-transport-https
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | tee /etc/apt/sources.list.d/microsoft.list

Update and install Powershell:

# (omit sudo if you are running as root inside a container)
sudo apt-get update
sudo apt-get install -y powershell

Finally, start up Powershell:

powershell

If it worked, you should see a message for Powershell and a new command prompt:

# powershell
PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS />

Congratulations, you now have Powershell running in Linux.  To take it for a spin, try a few commands out.

Write-Host "Hello Wordl!"

This should print a hello world message.  The Linux release is still in alpha, so there will surely be some discrepancies between Linux and Windows based systems, but the majority of cmdlets should work the same way.  For example, I noticed in my testing that the terminal handling was very flaky: reverse search (ctrl+r) and the Get-History cmdlet worked well, but arrow key scrolling through history did not.
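Here are a few more standard cmdlets to experiment with; given the alpha status, exact behavior on Linux may vary:

Get-Process | Select-Object -First 5   # list a few running processes
Get-ChildItem /                        # roughly the equivalent of ls /
Get-Location                           # print the working directory, like pwd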

You can even run Powershell on OSX now if you choose to.  I haven’t tried it yet, but it is an option for those that are curious.  Needless to say, I am looking forward to the beta release.


Curl on Windows using a Docker wrapper

Does the Windows built-in version of “curl” confuse or intimidate you?  Maybe you come from a Linux or Unix background and yearn for some of your favorite go-to tools.  Newer versions of Powershell include a cmdlet for interacting with the web called Invoke-WebRequest, which is useful, but it is not a great drop-in replacement for those with experience in non-Windows environments.  The Powershell cmdlets are a move in the right direction toward unifying CLI experiences, but there are still many folks, myself included, who have become attached to curl over the years.  It is worth noting that a Windows compatible version of curl has existed for a long time; however, it has always been a nuisance dealing with the zip file, just as using SSH has always been a hassle on Windows.  It has always been possible to use the *nix equivalent tools, it is just clunky.

I found a low effort solution for adding curl to my Windows CLI flow that acts as a nice middle ground between learning Invoke-WebRequest and installing the curl binaries directly, which I’d like to share.  This alias trick is a simple way to use curl for working with APIs and other web testing in Windows environments without getting tangled up in managing versions and dealing with vulnerabilities.  Just download the latest Docker image to update curl to the newest version, and don’t worry about its implementation across different systems.

Prerequisites are light.  First, make sure to have the Docker for Windows app installed (stable or beta are both fine) as well as a semi-recent version of Powershell.

Next, if you haven’t set up a Powershell profile, there are lots of links and resources about how to do it.  I even wrote about it recently, so I am skipping that step here.  Start by adding the following snippet to your Powershell profile (by default located in C:\Users\<user>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1) and saving.

# Curl alias using docker
function Docker-Curl {
   docker run --rm byrnedo/alpine-curl $args
}

# Aliases
New-Alias dcurl Docker-Curl

Then reload your Powershell profile (or open a new terminal) and run the curl command that was just created.

dcurl -h
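From there, it behaves like regular curl.  For example, here is a quick request against a public test endpoint (the URL is just an illustration):

dcurl -sS https://httpbin.org/get

One caveat of this approach is that curl runs inside the container, so flags like -o write to the container’s filesystem, not your own.  If you need downloads to land in your current directory, a variant that mounts it is one option (the function and alias names here are my own, hypothetical additions, and drive sharing must be enabled in Docker for Windows):

# Variant that mounts the current directory so downloads land locally
function Docker-CurlHere {
   docker run --rm -v "${PWD}:/work" -w /work byrnedo/alpine-curl $args
}
New-Alias dcurlh Docker-CurlHere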

One issue you might notice from the snippet above is that the Docker image is not an “official” image.  If this bothers you (security concerns, etc.), it is really easy to create your own image.  There are lots of examples of how to create minimal images with curl pre-installed.  Just be aware that your custom image will need to be maintained and occasionally rebuilt/published to guard against future vulnerabilities.  For brevity, I have skipped this process, but here’s an example of creating a custom image.
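For reference, a minimal image along those lines only takes a few lines of Dockerfile.  This is just a sketch; the base image tag and repository name are examples:

# Dockerfile: minimal image with curl as the entrypoint
FROM alpine:3.6
RUN apk add --no-cache curl
ENTRYPOINT ["curl"]

Build it with docker build -t <your-repo>/alpine-curl . and swap it in for byrnedo/alpine-curl in the function above.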

Optional

To update curl, just run the docker pull command.

docker pull byrnedo/alpine-curl

Now you have the best of both worlds.  The built-in Invoke-WebRequest cmdlet provided by Powershell is available, as well as the venerable curl command.

My number one reason for using curl in a container is that curl has been around for such a long time (fewer bugs and edge cases) and it can be used for nearly any web related task.  It is also much handier for those with a background in *nix systems to reach for curl, rather than digging around in unfamiliar Powershell docs for similar functionality.  Having the ability to run some of my favorite tools in an easy, reproducible way on Windows has been a refreshing experience while sliding back into the Windows world.


Backup Route 53 zones

We have all heard about DNS catastrophes.  I just read a horror story on reddit the other day, where an Azure root DNS zone was accidentally deleted with no backup.  I experienced a similar disaster a few years ago: a simple DNS change managed to knock out internal DNS for an entire domain, which contained hundreds of records.  Reading the post hit close to home, stirring up some of my own past anxiety, so I began poking around for solutions.  I immediately noticed that backing up DNS records is usually skipped over as part of the backup process.  Folks just tend to never do it, for whatever reason.

I did discover, though, that backing up DNS is easy.  So I decided to fix the problem.

I wrote a simple shell script that dumps all Route53 zones for a given AWS account to json files and uploads them to an S3 bucket.  The script is only a handful of lines, which is perfect because it doesn’t take much effort to potentially save your bacon.

If you don’t host DNS in AWS, the script can be modified to work with other DNS providers (assuming they have public APIs).

Here’s the script:

#!/usr/bin/env bash

set -e

# Dump route 53 zones to json files and upload them to S3.

BACKUP_DIR=/home/<user>/dns-backup
BACKUP_BUCKET=<bucket>
# Use full paths for cron
CLIPATH="/usr/local/bin"

# Dump all zones to files and upload to s3
function backup_all_zones () {
  local zones
  # Enumerate all zone IDs, stripping the /hostedzone/ prefix
  zones=$($CLIPATH/aws route53 list-hosted-zones | jq -r '.HostedZones[].Id' | sed "s/\/hostedzone\///")
  for zone in $zones; do
    echo "Backing up zone $zone"
    $CLIPATH/aws route53 list-resource-record-sets --hosted-zone-id "$zone" > "$BACKUP_DIR/$zone.json"
  done

  # Upload backups to s3 (server-side encrypted)
  $CLIPATH/aws s3 cp "$BACKUP_DIR" "s3://$BACKUP_BUCKET" --recursive --sse
}

# Create the backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
# Back up all the things
time backup_all_zones

Be sure to update <user> and <bucket> in the script to match your own environment settings.  Dumping the DNS records to json is nice because it allows for a more programmatic way of working with the data.  This script can be run manually, but it is much more useful when run automatically.  Just add the script to a cronjob and schedule it to dump DNS periodically.
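For example, a crontab entry along these lines would snapshot DNS nightly at 1 AM (the script path is just a placeholder):

# Run the Route53 backup every night at 1:00
0 1 * * * /home/<user>/dns-backup.sh >> /var/log/dns-backup.log 2>&1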

For this script to work, the aws cli and jq need to be installed.  The installation is skipped in this post, but it is trivial.  Refer to the links for instructions.

The aws cli needs to be configured to use an API key with read access to Route53 and the ability to write to S3.  Details are skipped for this step as well; be sure to consult the AWS documentation on setting up IAM permissions for help with creating API keys.  Another, simplified approach is to use a pre-existing key with admin credentials (not recommended).
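As a rough sketch (adjust the account details and tighten the permissions as needed), the required IAM policy boils down to something like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadRoute53Zones",
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    },
    {
      "Sid": "WriteBackupsToS3",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}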
