Exploring Docker Manifests

As part of my recent project to build an ARM-based Kubernetes cluster (more on that in a different post), I have run into quite a few cross-platform compatibility issues trying to get containers working in my cluster.

After a little bit of digging, I found that support for manifest lists was added in version 2.2 of the Docker image specification, which allows Docker images to be built against different platforms, including arm and arm64.  To add to this, I recently discovered that newer versions of Docker include a manifest subcommand that you can enable as an experimental feature to interact with these image manifests.  The manifest command is great for exploring Docker images without having to pull, run, and test them locally, or fight with curl to get this information about an image out of a Docker registry.

Enable the manifest command in Docker

First, make sure you have a reasonably recent version of Docker installed; I’m using 18.03.1 in this post.

Edit your Docker configuration file, usually located at ~/.docker/config.json.  The following example assumes you have authentication configured, but really the only additional configuration needed is { "experimental": "enabled" }.

{
  "experimental": "enabled",
    "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXX"
    }
  }
}

After adding the experimental configuration to the client, you should be able to access the docker manifest commands.

docker manifest -h
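
As a side note, newer Docker client releases also let you enable experimental CLI features with an environment variable instead of editing config.json, which can be handy for one-off commands.  Something like the following should work on clients that support it.

DOCKER_CLI_EXPERIMENTAL=enabled docker manifest --help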

To inspect a manifest just provide an image to examine.

docker manifest inspect traefik

This will spit out a bunch of information about the Docker image, including the schema version, platforms, digests, etc., which can be useful for finding out which platforms different images support.

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:36df85f84cb73e6eee07767eaad2b3b4ff3f0a9dcf5e9ca222f1f700cb4abc88",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:f98492734ef1d8f78cbcf2037c8b75be77b014496c637e2395a2eacbe91e25bb",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:7221080406536c12abc08b7e38e4aebd811747696a10836feb4265d8b2830bc6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

As you can see, the above image (traefik) supports the arm and arm64 architectures.  This is a really handy way to determine whether an image works across different platforms without having to pull it and try to run a command against it to see if it works.  The manifest subcommand has some other useful features that allow you to create, annotate, and push cross-platform images, but I won’t go into much detail here.
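
That said, here is a rough sketch of what that workflow looks like.  The image names below are hypothetical placeholders, and the per-architecture tags are assumed to already exist in the registry.

# create a manifest list that points at existing per-architecture images
docker manifest create jmreicha/example:latest jmreicha/example:amd64 jmreicha/example:arm64

# annotate the arm64 entry with its platform details
docker manifest annotate jmreicha/example:latest jmreicha/example:arm64 --os linux --arch arm64

# push the manifest list to the registry
docker manifest push jmreicha/example:latest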

Manifest tool

I’d also like to quickly mention the manifest-tool project.  This tool has more or less been superseded by the built-in docker manifest command but still works in basically the same way, allowing users to inspect, annotate, and push manifests.  manifest-tool has a few additional features, supports several registries other than Docker Hub, and even has a utility script to check whether a given registry supports the Docker v2 API and the 2.2 image spec.  It is definitely still a good tool to look at if you are interested in publishing multi-platform Docker images.

Downloading the manifest tool is easy as it is distributed as a Go binary.

curl -OL https://github.com/estesp/manifest-tool/releases/download/latest/manifest-tool-linux-amd64
mv manifest-tool-linux-amd64 manifest-tool
chmod +x manifest-tool

Once you have manifest-tool set up you can start using it, similar to the docker manifest inspect command.

./manifest-tool inspect traefik

This will dump out information about the image manifest if it exists.

Name:   traefik (Type: application/vnd.docker.distribution.manifest.list.v2+json)
Digest: sha256:eabb39016917bd43e738fb8bada87be076d4553b5617037922b187c0a656f4a4
 * Contains 3 manifest references:
1    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
1       Digest: sha256:e65103d16ded975f0193c2357ccf1de13ebb5946894d91cf1c76ea23033d0476
1  Mfst Length: 739
1     Platform:
1           -      OS: linux
1           - OS Vers:
1           - OS Feat: []
1           -    Arch: amd64
1           - Variant:
1           - Feature:
1     # Layers: 2
         layer 1: digest = sha256:03732cc4924a93fcbcbed879c4c63aad534a63a64e9919eceddf48d7602407b5
         layer 2: digest = sha256:6023e30b264079307436d6b5d179f0626dde61945e201ef70ab81993d5e7ee15

2    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
2       Digest: sha256:6cb42aa3a9df510b013db2cfc667f100fa54e728c3f78205f7d9f2b1030e30b2
2  Mfst Length: 739
2     Platform:
2           -      OS: linux
2           - OS Vers:
2           - OS Feat: []
2           -    Arch: arm
2           - Variant: v6
2           - Feature:
2     # Layers: 2
         layer 1: digest = sha256:8996ab8c9ae2c6afe7d318a3784c7ba1b1b72d4ae14cf515d4c1490aae91cab0
         layer 2: digest = sha256:ee51eed0bc1f59a26e1d8065820c03f9d7b3239520690b71fea260dfd841fba1

3    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
3       Digest: sha256:e12dd92e9ae06784bd17d81bd8b391ff671c8a4f58abc8f8f662060b39140743
3  Mfst Length: 739
3     Platform:
3           -      OS: linux
3           - OS Vers:
3           - OS Feat: []
3           -    Arch: arm64
3           - Variant: v8
3           - Feature:
3     # Layers: 2
         layer 1: digest = sha256:78fe135ba97a13abc86dbe373975f0d0712d8aa6e540e09824b715a55d7e2ed3
         layer 2: digest = sha256:4c380abe0eadf15052dc9ca02792f1d35e0bd8a2cb1689c7ed60234587e482f0

Likewise, you can annotate and push image manifests using the manifest-tool.  Below is an example command for pushing multiple image architectures.

./manifest-tool --docker-cfg '~/.docker' push from-args --platforms "linux/amd64,linux/arm64" --template jmreicha/example:test --target "jmreicha/example:test"

mquery

I’d also like to quickly touch on the mquery tool.  If you’re only interested in seeing whether a Docker image uses a manifest list, along with high-level multi-platform information, you can run this tool as a container.

docker run --rm mplatform/mquery traefik

Here’s what the output might look like.  Super simple but useful for quickly getting platform information.

Image: traefik
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm64/v8

This can be useful if you don’t need a solution that is quite as heavy as manifest-tool or enabling the built-in Docker experimental support.

You will still need to figure out how to build the image for each architecture first before pushing, but having the ability to use one image for all architectures is a really nice feature.
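
As a rough, hypothetical example (the image names are made up), the per-architecture builds might look something like this before being stitched together with one of the tools above.

# on an amd64 host
docker build -t jmreicha/example:amd64 .
docker push jmreicha/example:amd64

# on an arm64 host (or under emulation such as qemu)
docker build -t jmreicha/example:arm64 .
docker push jmreicha/example:arm64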

There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi-platform images using a single name.  This will be a great boon for bringing ARM adoption to the forefront and will help make the container experience on ARM much better going forward.


Mount a volume using Ignition and Terraform

Sometimes when provisioning a server you may want to configure and provision storage as part of the bootstrapping and booting process.  For example, the other day I ran into an issue where I needed to define a disk, partition it, mount it to a specified location, and then create a few directories in it.  Provisioning this storage turned out to be surprisingly non-trivial, and I learned quite a few things along the way that I thought were worth sharing.

I’d just like to mention that Ignition works like magic.  If you aren’t familiar, Ignition is basically a tool to help provision and configure servers, very similar to cloud-config, except that by default Ignition only runs once, on first boot.  The magic of Ignition is that it injects itself into the initramfs before the OS ever boots and manipulates the system from there.  Ignition configs can also be read in from a remote URL so they can easily be used to provision bare-metal infrastructure.  There were several pieces to this puzzle.

The first was getting down all of the various Ignition configuration components in Terraform.  Nothing was particularly complicated; there was just a lot of trial and error to get everything working.  Terraform has some really nice documentation for working with Ignition configurations, so I’d recommend starting there and just playing around to figure out the various bits and pieces of configuration that Ignition can handle.  There is some documentation on Ignition troubleshooting as well, which I found helpful when things weren’t working correctly.

Below, each portion of the Ignition configuration gets declared inside of an "ignition_config" block.  The Ignition configuration then points to each individual component that we want Ignition to configure, e.g. systemd units, filesystems, directories, etc.

data "ignition_config" "staging_rancher_host_stateful" {
  systemd = [
     "${data.ignition_systemd_unit.mount_data.id}",
  ]

  filesystems = [
    "${data.ignition_filesystem.data_fs.id}",
  ]

  directories = [
    "${data.ignition_directory.data_dir.id}",
  ]

  disks = [
    "${data.ignition_disk.data_disk.id}",
  ]
}

This part of the setup is pretty straightforward.  Create a data block with the needed Ignition configuration to mount the disk to the correct location, format the device if it hasn’t already been formatted, create the desired directory, and then create the Systemd unit to configure the mount point for the OS.  Here’s what each of the data blocks might look like.

data "ignition_filesystem" "data_fs" {
   name = "data"

  mount {
    device = "/dev/xvdb1"
    format = "ext4"
  }
}

data "ignition_directory" "data_dir" {
  filesystem = "data"
  path = "/data"
  uid = 500
  gid = 500
}

data "ignition_disk" "data_disk" {
  device = "/dev/xvdb"

  partition {
    number = 1
    start = 0
    size = 0
  }
}

Next, create the Systemd unit.

data "ignition_systemd_unit" "mount_data" {
  content = "${file("./data.mount")}"
  name = "data.mount"
}

Another challenge was getting the Systemd unit to mount the disk correctly.  I don’t work with Systemd frequently, so I initially had some trouble figuring this part out.  Basically, Systemd expects the mount unit’s file name to EXACTLY match the path declared in the "Where" clause of the unit, converted into unit-name form.

For example, the following configuration needs to be named data.mount because that is the unit name corresponding to the /data mount point defined in the unit.

[Unit]
Description=Mount /data
Before=local-fs.target

[Mount]
What=/dev/xvdb1
Where=/data
Type=ext4

[Install]
WantedBy=local-fs.target
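
As a side note, if you are ever unsure what name Systemd expects for a given mount path, the systemd-escape utility will generate it for you.

# prints "data.mount", the unit name expected for a mount on /data
systemd-escape -p --suffix=mount /data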

After all the kinks have been worked out of the Systemd unit(s) and the Terraform Ignition configuration above, you should be able to deploy this and have Ignition provision disks for you automatically when the OS comes up.  This can be extended as much as needed for getting initial disks set up correctly and is a huge step toward automating your infrastructure in a nice, repeatable way.
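
Once a host has booted, a quick sanity check on the machine itself should confirm that Ignition did its job and the mount is live.

# run on the provisioned host
systemctl status data.mount
df -h /data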

There is currently an open issue with Ignition where it breaks when attempting to re-provision a previously configured disk on a new machine.  Basically, the Ignition process chokes because it sees that the device has already been partitioned and formatted and can’t do it again.  I ran into this scenario when trying to create a floating persistent data EBS volume that gets attached to servers in an autoscaling group, where I wanted the volume to be able to move around freely if a server gets killed off.


Bash tricks


Update 2/18/18 – added some handy Alt shortcuts

Bash is great.  As I have discovered over the years, Bash contains many different layers, like a good movie or a fine wine.  It is fun to explore and expose these different layers and find uses for them.  As my experience level has increased, I have (slowly) uncovered a number of these features of Bash that make life easier and worked to incorporate them in different ways into my own workflows and use them within my own style.

The great thing about fine arts, Bash included, is that there are so many nuances and for Bash, a huge number of features and uses, which makes the learning process that much more fun.

It does take a lot of time and practice to get used to the syntax and to become effective with these shortcuts.  I use this page as a reference whenever I think of something that sounds like it would be useful and could save time in a script or a command.  At first it may take more time to look up how to use these shortcuts, but eventually, with practice and drilling, they will become second nature and real time savers.

Shell shortcuts

Navigating the Bash shell is easy to do, but it takes time to learn how to do it well.  Below are a number of shortcuts that make the navigation process much more efficient.  I use nearly all of these shortcuts daily (except Ctrl + t and Ctrl + xx, which I only recently discovered).  In a similar vein, I wrote a separate post long ago about setting up CLI shortcuts on iTerm that can further augment the capabilities of the CLI.

This is a nice reference with more examples and features.

  • Ctrl + a => Return to the start of the command you’re typing
  • Ctrl + e => Go to the end of the command you’re typing
  • Ctrl + u => Cut everything before the cursor to a special clipboard
  • Ctrl + k => Cut everything after the cursor to a special clipboard
  • Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
  • Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
  • Ctrl + w => Delete the word / argument left of the cursor
  • Ctrl + l => Clear the screen
  • Ctrl + _ => Undo previous key press
  • Ctrl + xx => Toggle between current position and the start of the line

There are some nice Alt key shortcuts in Linux as well.  You can map the alt key in OSX pretty easily to unlock these shortcuts.

  • Alt + l => Lowercase the word from the cursor onward (if the cursor is in the middle of a word, it will lowercase the rest of that word)
  • Alt + u => Uppercase the word from the cursor onward
  • Alt + t => Swap words or arguments that the cursor is under with the previous
  • Alt + . => Paste the last word of the previous command
  • Alt + b => Move backward one word
  • Alt + f => Move forward one word
  • Alt + r => Undo any changes that have been done to the current command

Argument tricks

Argument tricks build on the navigation capabilities that Bash shortcuts provide and can further speed up your effectiveness in the terminal.  Below are some special arguments that can be passed to new commands and expanded into pieces of previous commands, with a quick example after each list.

Repeating

  • !! => Repeat the previous (full) command
  • !foo => Repeat the most recent command that starts with 'foo' (e.g. !ls)
  • !^ => Repeat the first argument of the previous command
  • !$ => Repeat the last argument of the previous command
  • !* => Repeat all arguments of last command
  • !:<number> => Repeat a specifically positioned argument
  • !:1-2 => Repeat a range of arguments
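
Here is a quick example of a few of these expansions in action.

$ mkdir -p /tmp/demo/stuff
$ cd !$            # expands to: cd /tmp/demo/stuff
$ echo one two three
$ echo !^ !$       # expands to: echo one three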

Printing

  • !$:p => Print out the word that !$ would substitute
  • !*:p => Print out the arguments of the previous command (everything except the command name itself)
  • !foo:p => Print out the command that !foo would run
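
For example, the :p modifier is a safe way to double check what a history expansion would run before actually running it.

$ ls -la /var/log
$ !ls:p            # prints "ls -la /var/log" without executing it again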

Special parameters

When writing scripts, there are a number of special parameters you can reference in the shell.  These can be convenient for doing lots of different things in scripts.  Part of the fun of writing scripts and automating things is discovering creative ways to fit the various pieces of the puzzle together in elegant ways.  The "special" parameters listed below can be seen as pieces of that puzzle, and can be very powerful building blocks in your scripts.

Here is a full reference from the Bash documentation

  • $* => Expands the positional parameters. When quoted, it becomes a single word with the parameters joined by the first character of IFS – think space-separated string
  • $@ => Expands the positional parameters. When quoted, each parameter expands to a separate word – think arrays
  • $# => Expands to the number of parameters passed to the script or function
  • $? => Expands to the exit status of the previous command
  • $$ => Expands to the PID of the shell
  • $! => Expands to the PID of the most recent background command
  • $0 => Expands to the name of the shell or script
  • $_ => Expands to the last argument of the previous command
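
Below is a tiny, throwaway script that shows a few of these parameters in action.

#!/usr/bin/env bash
# save as params.sh and run: ./params.sh foo bar baz
echo "script name:      $0"
echo "number of args:   $#"
echo "all args:         $@"
false
echo "last exit status: $?"
sleep 1 &
echo "background pid:   $!"
echo "shell pid:        $$"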

Conclusion

There are so many crevices and cracks of Bash to explore; I keep finding new and interesting things that lead down new paths and help my skills grow.  I hope some of these tricks give you ideas that can help improve your own Bash style and workflows in the future.


Configure a Rancher HAProxy health check

If you are familiar with ELB/ALB you will know that there are slight idiosyncrasies between the two.  For example, ELB allows you to health check a back-end server by TCP port, basically letting you check whether a back end comes up and is listening on a specified port.  ALB is slightly different in its method of health checking; it uses HTTP checks (layer 7) to ensure back-end instances are up and listening.

This becomes a problem in Rancher when you have multiple stacks in a single environment that are fronted by the Rancher HAProxy load balancer.  By default, the HAProxy config does not have a health check endpoint configured, so ALB is never able to tell whether the back-end server is actually up and listening for requests.

A colleague and I recently discovered a neat trick for solving this problem if you are fronting your environment with an ALB.  The solution to this conundrum is to sprinkle a little bit of custom configuration into the Rancher HAProxy config.

In Rancher, you can modify the live settings without downtime.  Click on the load balancer that sits behind the ALB and navigate to the Custom haproxy.cfg tab.

(Screenshot: the Custom haproxy.cfg tab in Rancher)

Modify the HAProxy config by adding the following:

# Used to report haproxy's status
defaults
    mode http
    monitor-uri /_ping

Click the “Edit” button to apply these changes and you should be all set.
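
Before touching the ALB, you can quickly verify that the new endpoint responds (substitute your own load balancer address); HAProxy should answer the monitor URI with a 200.

curl -i http://rancher-lb.example.com/_ping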

Next, find the health check configuration for the associated ALB in the AWS console and add a check to the /_ping path on port 80 (or whichever port you are exposing/plan to listen on).  It should look similar to the following example.

(Screenshot: ALB health check configuration)
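
If you prefer to script this instead of clicking through the console, something along these lines should work with the AWS CLI for an ALB target group (the ARN below is a placeholder).

aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/rancher/0123456789abcdef \
  --health-check-protocol HTTP \
  --health-check-port 80 \
  --health-check-path /_ping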

Below is an example that maps a DNS name to an internal Nginx container that is listening for requests on port 80.

(Screenshot: HAProxy configuration mapping a DNS name to an internal Nginx service)

The check in ALB ensures that the HAProxy load balancer in Rancher is up and running before allowing traffic to be routed to it.  You can verify that your Rancher load balancer is working if the instances behind your ALB start showing a status of healthy in the AWS console.

NOTE: If you don’t have any apps initially behind the Rancher load balancer (or any that are listening on the port specified in the health check), the AWS instances behind the ALB will remain unhealthy until you add configuration in Rancher for the stacks to be exposed, as pictured above.

After setting up HAProxy, publicly accessible services in private Rancher environments can easily be managed by updating the HAProxy config.  Just add a DNS name and a service to link to, and HAProxy figures out how and where to route requests.  To map other services that aren’t listening on port 80, the process is very similar: use the above as a guideline and simply update the target port to whichever port the app is listening on internally.


Powershell for Linux!

Microsoft has been making a lot of inroads in the Open Source and Linux communities lately.  Linux and Unix purists undoubtedly have been skeptical of this recent shift.  Canonical, for example, has caught flak for partnering with Microsoft recently.  But the times are changing, so instead of resenting this progress, I chose to embrace it.  I’ll even admit that I actually like many of the Open Source contributions Microsoft has been making – including a flourishing Github account, as well as an increasingly rich, cross-platform set of tools that includes Visual Studio Code, Ubuntu/Bash for Windows, .NET Core and many others.

If you want to take the latest and greatest Powershell v6 for a spin on a Linux system, I recommend using a Docker container if available.  Otherwise, just spin up an Ubuntu (14.04+) VM and you should be ready.  I do not recommend trying out Powershell for any type of workload outside of experimentation, as it is still in alpha for Linux.  The beta v6 release (with Linux support) is around the corner, but there is still a lot of ground to cover to get there.  Since Powershell is Open Source, you can follow the progress on Github!

If you use the Docker method, just pull and run the container:

docker run -it --rm ubuntu:16.04 bash

Then add the Microsoft Ubuntu repo:

# apt-transport-https is needed for connecting to the MS repo
apt-get update && apt-get install curl apt-transport-https
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | tee /etc/apt/sources.list.d/microsoft.list

Update and install Powershell:

sudo apt-get update
sudo apt-get install -y powershell

Finally, start up Powershell:

powershell

If it worked, you should see a message for Powershell and a new command prompt:

# powershell
PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS />

Congratulations, you now have Powershell running in Linux.  To take it for a spin, try a few commands out.

Write-Host "Hello World!"

This should print out a hello world message.  The Linux release is still in alpha, so there will surely be some discrepancies between Linux and Windows-based systems, but the majority of cmdlets should work the same way.  For example, I noticed in my testing that the terminal was very flaky: reverse search (Ctrl+r) and the Get-History cmdlet worked well, but arrow-key scrolling through history did not.
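
If you want to poke around a bit more, a few basic cmdlets like these should behave the same way they do on Windows.

Get-Process | Select-Object -First 5
Get-ChildItem /tmp
Get-History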

You can even run Powershell on OSX now if you choose to.  I haven’t tried it yet, but it is an option for those who are curious.  Needless to say, I am looking forward to seeing where this goes.
