Exploring Docker Manifests

As part of my recent project to build an ARM based Kubernetes cluster (more on that in a different post) I have run into quite a few cross platform compatibility issues trying to get containers working in my cluster.

After a little bit of digging, I found that version 2.2 of the Docker image specification added support for manifest lists, which allow Docker images to be built against different platforms, including arm and arm64.  To add to this, I just recently discovered that newer versions of Docker include a manifest sub-command that you can enable as an experimental feature to interact with these image manifests.  The manifest command is great for exploring Docker images without having to pull, run, and test them locally, or fight with curl to get this information about an image from a Docker registry.

Enable the manifest command in Docker

First, make sure you have a semi-recent version of Docker installed; I’m using 18.03.1 in this post.

Edit your Docker configuration file, usually located at ~/.docker/config.json.  The following example assumes you have authentication configured, but really the only additional configuration needed is the "experimental": "enabled" entry.

{
  "experimental": "enabled",
    "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXX"
    }
  }
}

After adding the experimental configuration to the client you should be able to access the docker manifest commands.

docker manifest -h

To inspect a manifest just provide an image to examine.

docker manifest inspect traefik

This will spit out a bunch of information about the Docker image, including the schema version, platforms, digests, etc., which can be useful for finding out which platforms different images support.

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:36df85f84cb73e6eee07767eaad2b3b4ff3f0a9dcf5e9ca222f1f700cb4abc88",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:f98492734ef1d8f78cbcf2037c8b75be77b014496c637e2395a2eacbe91e25bb",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:7221080406536c12abc08b7e38e4aebd811747696a10836feb4265d8b2830bc6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

As you can see, the above image (traefik) supports the arm and arm64 architectures.  This is a really handy way to determine whether an image works across different platforms without having to pull it and run a command against it to see if it works.  The manifest sub-command has some other useful features that allow you to create, annotate, and push cross-platform images, but I won’t go into much detail here beyond the quick sketch below.
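Here is roughly what that workflow looks like, assuming you have already built and pushed per-architecture images (the jmreicha/example names below are just hypothetical placeholders).

# Combine per-architecture images into a single multi-platform manifest list.
docker manifest create jmreicha/example:latest jmreicha/example:amd64 jmreicha/example:arm64

# Annotate the arm64 entry with its platform details.
docker manifest annotate jmreicha/example:latest jmreicha/example:arm64 --os linux --arch arm64

# Push the manifest list to the registry.
docker manifest push jmreicha/example:latest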

Manifest tool

I’d also like to quickly mention the Docker manifest-tool.  This tool is more or less superseded by the built-in Docker manifest command but still works basically the same way, allowing users to inspect, annotate, and push manifests.  The manifest-tool has a few additional features, supports several registries other than Docker Hub, and even has a utility script to see if a given registry supports the Docker v2 API and 2.2 image spec.  It is definitely still a good tool to look at if you are interested in publishing multi-platform Docker images.

Downloading the manifest-tool is easy, as it is distributed as a Go binary.

curl -OL https://github.com/estesp/manifest-tool/releases/download/latest/manifest-tool-linux-amd64
mv manifest-tool-linux-amd64 manifest-tool
chmod +x manifest-tool

Once you have the manifest-tool set up you can start using it, similar to the docker manifest inspect command.

./manifest-tool inspect traefik

This will dump out information about the image manifest if it exists.

Name:   traefik (Type: application/vnd.docker.distribution.manifest.list.v2+json)
Digest: sha256:eabb39016917bd43e738fb8bada87be076d4553b5617037922b187c0a656f4a4
 * Contains 3 manifest references:
1    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
1       Digest: sha256:e65103d16ded975f0193c2357ccf1de13ebb5946894d91cf1c76ea23033d0476
1  Mfst Length: 739
1     Platform:
1           -      OS: linux
1           - OS Vers:
1           - OS Feat: []
1           -    Arch: amd64
1           - Variant:
1           - Feature:
1     # Layers: 2
         layer 1: digest = sha256:03732cc4924a93fcbcbed879c4c63aad534a63a64e9919eceddf48d7602407b5
         layer 2: digest = sha256:6023e30b264079307436d6b5d179f0626dde61945e201ef70ab81993d5e7ee15

2    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
2       Digest: sha256:6cb42aa3a9df510b013db2cfc667f100fa54e728c3f78205f7d9f2b1030e30b2
2  Mfst Length: 739
2     Platform:
2           -      OS: linux
2           - OS Vers:
2           - OS Feat: []
2           -    Arch: arm
2           - Variant: v6
2           - Feature:
2     # Layers: 2
         layer 1: digest = sha256:8996ab8c9ae2c6afe7d318a3784c7ba1b1b72d4ae14cf515d4c1490aae91cab0
         layer 2: digest = sha256:ee51eed0bc1f59a26e1d8065820c03f9d7b3239520690b71fea260dfd841fba1

3    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
3       Digest: sha256:e12dd92e9ae06784bd17d81bd8b391ff671c8a4f58abc8f8f662060b39140743
3  Mfst Length: 739
3     Platform:
3           -      OS: linux
3           - OS Vers:
3           - OS Feat: []
3           -    Arch: arm64
3           - Variant: v8
3           - Feature:
3     # Layers: 2
         layer 1: digest = sha256:78fe135ba97a13abc86dbe373975f0d0712d8aa6e540e09824b715a55d7e2ed3
         layer 2: digest = sha256:4c380abe0eadf15052dc9ca02792f1d35e0bd8a2cb1689c7ed60234587e482f0

Likewise, you can annotate and push image manifests using the manifest-tool.  Below is an example command for pushing multiple image architectures.

./manifest-tool --docker-cfg '~/.docker' push from-args --platforms "linux/amd64,linux/arm64" --template jmreicha/example:test --target "jmreicha/example:test"
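The manifest-tool also supports a push from-spec mode driven by a yaml file, which I find a bit easier to read for more than a couple of platforms.  Below is a rough sketch of what that spec might look like; the image names are hypothetical and the exact fields may vary between manifest-tool versions, so double check the project README.

image: jmreicha/example:test
manifests:
  - image: jmreicha/example:test-amd64
    platform:
      architecture: amd64
      os: linux
  - image: jmreicha/example:test-arm64
    platform:
      architecture: arm64
      os: linux

./manifest-tool --docker-cfg '~/.docker' push from-spec example-spec.yml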

mquery

I’d also like to touch quickly on the mquery tool.  If you’re only interested in seeing whether a Docker image uses a manifest list, along with high-level multi-platform information, you can run this tool as a container.

docker run --rm mplatform/mquery traefik

Here’s what the output might look like.  Super simple but useful for quickly getting platform information.

Image: traefik
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm64/v8

This can be useful if you don’t need a solution that is quite as heavy as manifest-tool or enabling the built-in Docker experimental support.

You will still need to figure out how to build the image for each architecture first before pushing, but having the ability to use one image for all architectures is a really nice feature.

There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi-platform images using a single name.  This will be a great boon for helping to bring ARM adoption to the forefront and will help make the container experience on ARM much better going forward.

Read More

Python virtualenv Notes

Virtual environments are really useful for maintaining different packages and for separating different environments without getting your system messy.  In this post I will go over some of the various virtual environment tricks that I seem to always forget if I haven’t worked with Python in a while.

This post is meant to be mostly a reference for remembering commands, syntax, and other helpful notes.  I’d also like to mention that these steps were all tested on OSX; I haven’t tried them on Windows, so I don’t know if anything is different there.

Working with virtual environments

There are a few pieces needed in order to get started.  First, the default version of Python that ships with OSX is 2.7, which is slowly moving towards extinction.  Unfortunately, it isn’t exactly obvious how to replace this version of Python on OSX.

Just doing a “brew install python” won’t actually point the system at the newly installed version.  In order to get Python 3.x working correctly, you need to update the system path so the brew-installed Python 3 comes first.

export PATH="/usr/local/opt/python/libexec/bin:$PATH"

You will want to put the above line into your bashrc or zshrc (or whatever shell profile you use) to get the brew installed Python onto your system path by default.
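Once the path is updated and your shell reloaded, it is worth sanity checking that the brew version of Python is the one being picked up; the path below assumes a default Homebrew install.

which python      # should point at /usr/local/opt/python/libexec/bin/python
python --version  # should now report a 3.x version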

Another thing I discovered – in Python 3 there is a built in command for creating virtual environments, which alleviates the need to install the virtualenv package.

Here is the Python 3 command to create a new virtual environment.

python -m venv test

Once the environment has been created, it works very much like virtualenv.  To use the environment, just source its activate script.

source test/bin/activate

To deactivate the environment, just use the “deactivate” command, like you would with virtualenv.
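Putting it all together, a typical session might look something like the following; the requests package is just an arbitrary example.

python -m venv test
source test/bin/activate
pip install requests            # installs only into the virtual environment
pip freeze > requirements.txt   # capture the environment's packages
deactivate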

The virtualenv package

If you like the old school way of doing virtual environments you can still use the virtualenv package for managing your environments.

Once you have the correct version of Python, you will want to install the virtualenvwrapper package (which builds on top of virtualenv) globally on your system in order to work with virtual environments.

sudo pip install virtualenvwrapper

You can check to see if the package was installed correctly by running virtualenv -h.  There are a number of useful virtualenvwrapper commands listed below for working with the environments.

Make a new virtual env

mkvirtualenv test-env

Work on a virtual env

workon test-env

Stop working on a virtual env

deactivate (run this while the env is active)

List environments

lsvirtualenv

Remove a virtual environment

rmvirtualenv test-env

Create virtualenv with specific version of python

mkvirtualenv -p $(which pypy) test2-env

Look at where environments are stored

ls ~/.virtualenvs

I’ll leave it here for now.  The commands and tricks I (re)discovered were enough to get me back to being productive with virtual environments.  If you have some other tips or tricks feel free to let me know.  I will update this post if I find anything else that is noteworthy.

Read More

Toggle the Vscode sidebar with vim bindings

There is an annoying behavior that you might run across in the Windows version of Vscode if you are using the Vsvim plugin when trying to use the default hotkey <ctrl+b> to toggle the sidebar open and closed.  As far as I can tell, this glitch doesn’t exist in the OSX version of Vscode.

In Vim, that key combination is actually used to scroll the buffer up one screen.  Obviously this behavior is the default in Vim, but it is also annoying to not be able to open and close the sidebar.  The solution is a simple remap of the keyboard shortcut in Vscode.

To do this, pull open the keyboard shortcuts by either navigating to File -> Preferences -> Keyboard Shortcuts, or by using the command palette (ctrl + shift + p) and searching for “keyboard shortcut”.

Once the keyboard shortcuts have been pulled open just do a search for “sidebar” and you should see the default key binding.

Just click in the “Toggle Side Bar Visibility” box and remap the keybinding to Ctrl+Shift+B (since that doesn’t get used by anything by default in Vim).  This menu is also really useful for just discovering all of the different keyboard shortcuts in Vscode.  I’m not really a fan of going crazy and totally customizing hotkeys in Vscode just because it makes support much harder but in this case it was easy enough and works really well with my workflow.
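If you prefer editing configuration files directly, the same remap can be expressed in keybindings.json (reachable from the Keyboard Shortcuts editor).  Here is a rough sketch of what that entry might look like, assuming the built-in toggle command.

[
  {
    "key": "ctrl+shift+b",
    "command": "workbench.action.toggleSidebarVisibility"
  }
]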

Read More

Kubernetes CLI Tricks

 

Kubernetes is complicated, as you’ve probably already discovered if you’ve used it before.  Likewise, the kubectl command line tool can do pretty much anything, but can feel cumbersome, clunky, and generally overwhelming for those who are new to the Kubernetes ecosystem.  In this post I want to take some time to describe a few of the CLI tools that I have discovered that help ease the pain of working with and managing Kubernetes from the command line.

There are many more tools out there and the list keeps growing, so I will probably revisit this post in the future to add more cool stuff as the community continues to grow and evolve.

Where to find projects?

As a side note, there are a few places to check for tools and projects.  The first is the CNCF Cloud Native Landscape.  This site aims to keep track of all the various projects in the Cloud/Kubernetes world.  An entire post could be written about all of the features and filters, but at the highest level it is useful for exploring and discovering all the different evolving projects.  Make sure to check out the filtering capabilities.

The other project I have found to be extremely useful for finding different projects is the awesome-kubernetes repo on Github.  I found a number of tools mentioned in this post because of the awesome-kubernetes project.  There is some overlap between the Cloud Native Landscape and awesome-kubernetes, but they mostly complement each other very nicely.  For example, awesome-kubernetes has a lot more resources for working with Kubernetes and a lot of the smaller projects and utilities that haven’t made it into the Cloud Native Landscape.  Definitely check this project out if you’re looking to explore more of the Kubernetes ecosystem.

Kubectl tricks

These are various little tidbits that I have found to help boost my productivity from the CLI.

Tab completion – This is the first thing you will probably want to get working when starting out.  There are just too many options to memorize, and tab completion provides a nice way to look through all of the various commands while learning how Kubernetes works.  To install the bash completion dependency on OS X, run the following command.

brew install bash-completion

In bash, adding the completion is as simple as running source <(kubectl completion bash).  The same behavior can be accomplished in zsh using source <(kubectl completion zsh).
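To make completion stick across sessions, append the appropriate line to your shell profile, something like the following for bash (use .zshrc and the zsh completion for zsh).

echo 'source <(kubectl completion bash)' >> ~/.bashrc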

Aliases and shortcuts – One distinct characteristic of Kubernetes is how cumbersome the CLI can be.  If you use zsh and something like oh-my-zsh, there is a default set of aliases that works pretty well, which you can find here.  There are many posts about aliases out there already, so I won’t go into too much detail about them.  I will say though that aliasing k to kubectl is one of the best time savers I have found so far.  Just add the following snippet to your bash/zsh profile for maximum glory.

alias k=kubectl
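If you use bash completion, one extra line (a trick I recall from the kubectl completion docs) makes tab completion follow the alias as well.

complete -o default -F __start_kubectl k   # requires kubectl completion bash to be sourced first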

kubectl --export – This is a nice hidden feature that basically allows users to switch Kubernetes from imperative (create) to declarative (apply).  The --export flag will take an existing object, strip out unwanted/unneeded metadata like statuses and timestamps, and present a clean version of what’s running, which can then be exported to a file and applied back to the cluster.  The biggest advantage of using declarative configs is the ability to manage and maintain them in git repos.
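As a quick sketch, exporting a hypothetical deployment named mydeployment and re-applying it declaratively looks something like this.

kubectl get deployment mydeployment -o yaml --export > mydeployment.yaml
# edit the file, check it into git, then manage it declaratively from here on out
kubectl apply -f mydeployment.yaml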

kubectl top – In newer versions, there is the top command, which gives a high level overview of CPU and memory utilization in the cluster.  Utilization can be filtered at the node level as well as the pod level to give a very quick and dirty view into potential bottlenecks in the cluster.  In older versions, Heapster needs to be installed for this functionality to work correctly; in newer versions, metrics-server needs to be running.
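Usage is about as simple as it gets, for example:

kubectl top nodes
kubectl top pods -n kube-system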

kubectl explain – This is a utility built into kubectl that basically provides a man page for what each Kubernetes resource does.  It is a simple way to explore Kubernetes without leaving the terminal.
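For example, you can drill down into nested fields to see what each one does.

kubectl explain deployment
kubectl explain deployment.spec.template.spec.containers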

kubectx/kubens

This is an amazing little utility for quickly moving between Kubernetes contexts and namespaces.  Once you start working with multiple different Kubernetes clusters, you notice how cumbersome it is to switch between environments and namespaces.  Kubectx solves this problem by providing a quick and easy way to see what environments and namespaces a user is currently in and also quickly switch between them.  I haven’t had any issues with this tool and it is quickly becoming one of my favorites.
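Basic usage looks something like the following (the context and namespace names are just placeholders).

kubectx                # list available contexts
kubectx my-cluster     # switch to the my-cluster context
kubens                 # list namespaces in the current context
kubens kube-system     # switch the default namespace to kube-system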

stern

Dealing with log output using kubectl is a bit of a chore.  Stern (and similarly kail) offer a much nicer user experience when dealing with logs.  These tools let users do things like show logs for multiple containers in a pod, use regex matching to tail logs for specific containers, get nicely colored output for distinguishing between logs, filter logs by namespace, and a bunch of other nice things.
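For example, a command like the following (with a hypothetical app name and namespace) tails logs for every container whose pod name matches the query, going back 15 minutes.

stern my-app -n staging --since 15m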

Obviously for a full setup, using an aggregated/centralized logging solution with something like Fluentd or Logstash would be more ideal, but for examining logs in a pinch, these tools do a great job and are among my favorites.  As an added bonus, I don’t have to copy/paste log names any more.

yq

yq is a nice little command line tool for parsing yaml files, which works in a similar way to the venerable jq.  Parsing, reading, and updating yaml can sometimes be tricky, and this tool is a great, lightweight way to manipulate configurations.  It is especially useful for things like CI/CD, where a tag or version that is nested deep inside the yaml needs to change.
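As a rough sketch, assuming the Go-based yq and its v2-era read/write syntax (newer versions changed the command layout, so check the docs for your version), reading and bumping a nested image tag in a hypothetical deployment.yaml looks something like this.

yq r deployment.yaml 'spec.template.spec.containers[0].image'
yq w -i deployment.yaml 'spec.template.spec.containers[0].image' 'myrepo/myapp:v2'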

There is also the lesser known jsonpath output option, baked into kubectl, that allows you to interact with the json version of a Kubernetes object.  This feature is definitely less powerful than jq/yq but works well when you don’t want to overcomplicate things.  Below you can see how to use it to quickly grab the name of an object.

kubectl get pods -o=jsonpath='{.items[0].metadata.name}'

Working with yaml and json for configuration in general seems to be an emerging pattern for almost all of the new projects.  It is definitely worth learning a few tools like yq and jq to get better at parsing and manipulating data using these tools.

ksonnet / jsonnet

Similar to the above, ksonnet and jsonnet are basically templating tools for working with Kubernetes and json objects.  These two tools work nicely for managing Kubernetes manifests and make a great fit for automating deployments, etc. with CI/CD.

ksonnet and jsonnet are gaining popularity because of their ease of use and simplicity compared to a tool like Helm, which also does templating but needs a pod with system-level permissions running in the Kubernetes cluster.  Jsonnet is all client side, which removes that added attack vector while still giving users the flexibility of a templating language for creating and managing configs.

More random Kubernetes tricks

Since 1.10, kubectl has the ability to port forward to a resource name rather than just a pod.  So instead of looking up running pods and connecting to one every time, you can just grab the service or deployment name and port forward to it.

port-forward TYPE/NAME [LOCAL_PORT:]REMOTE_PORT
k port-forward deployment/mydeployment 5000:6000

New in 1.11, which will be dropping soonish, there is a top level command called api-resources, which allows users to view and interact with API object types.  This will be a nice troubleshooting tool to have if, for example, you want to see what kinds of objects are in a namespace.  The following command will show you these objects.

k api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n foo

Another handy trick is the ability to grab a base64 string and decode it on the fly.  This is useful when you are working with secrets and need to quickly look at what’s in the secret.  You can adapt the following command to accomplish this (make sure you have jq installed).

k get secret my-secret --namespace default -o json | jq -r '.data | .["secret-field"]' | base64 --decode

Just replace .["secret-field"] to use your own field.

UPDATE: I just recently discovered a simple command line tool for decoding base64 on the fly called Kubernetes Secret Decode (ksd for short).  This tool looks for base64 and renders it out for you automatically so you don’t have to worry about screwing around with jq and base64 to extract data out when you want to look at a secret.

k get secret my-secret --namespace default -o json | ksd

That command is much cleaner and easier to use.  This utility is a Go app and there are binaries for it on the releases page, just download it and put it in your path and you are good to go.

Conclusion

The Kubernetes ecosystem is a vast world, and it only continues to grow and evolve.  There are many more kubectl use cases and community tools that I haven’t discovered yet.  Feel free to let me know any other kubectl tricks you know of, and I will update them here.

I would love to grow this list over time as I get more acquainted with Kubernetes and its different tools.

Read More

Export SNMP metrics with the Prometheus Operator

There are quite a few use cases for monitoring outside of Kubernetes, especially for previously built infrastructure and otherwise legacy systems.  Additional monitoring adds an extra layer of complexity to your monitoring setup and configuration, but fortunately Prometheus makes this extra complexity easier to manage and maintain, inside of Kubernetes.

In this post I will describe a nice clean way to monitor things that are external to Kubernetes using Prometheus and the Prometheus Operator.  The advantage of this approach is that it allows the Operator to manage and monitor infrastructure, and it allows Kubernetes to do what it’s good at: make sure the things you want are running for you in an easy to maintain, declarative manifest.

If you are already familiar with the concepts in Kubernetes then this post should be pretty straightforward.  Otherwise, you can pretty much copy/paste most of these manifests into your cluster and you should have a good way to monitor things in your environment that are external to Kubernetes.

Below is an example of how to monitor external network devices using the Prometheus SNMP exporter.  There are many other exporters that can be used to monitor infrastructure that is external to Kubernetes but currently it is recommended to set up these configurations outside of the Prometheus Operator to basically separate monitoring concerns (which I plan on writing more about in the future).

Create the deployment and service

Here is what the deployment might look like.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: snmp-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: snmp-exporter
  template:
    metadata:
      labels:
        app: snmp-exporter
    spec:
      containers:
      - name: snmp-exporter
        image: oakman/snmp-exporter
        command: ["/bin/snmp_exporter"]
        args: ["--config.file=/etc/snmp_exporter/snmp.yml"]
        ports:
        - containerPort: 9116
          name: metrics

And the accompanying service.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: snmp-exporter
  name: snmp-exporter
spec:
  ports:
  - name: http-metrics
    port: 9116
    protocol: TCP
    targetPort: metrics
  selector:
    app: snmp-exporter

At this point you should have a pod running in your cluster, fronted by a service with a stable cluster IP.  To see if it worked, you can check to make sure the service IP was created.  The service is basically what the Operator uses to create targets in Prometheus.

kubectl get svc

From this point you can 1) set up your own instance of Prometheus using Helm or by deploying via yml manifests or 2) set up the Prometheus Operator.

Today we will walk through option 2, although I will probably cover option 1 at some point in the future.

Setting up the Prometheus Operator

The beauty of using the Prometheus Operator is that it gives you a way to quickly add or change Prometheus specific configuration via the Kubernetes API (custom resource definition) and some custom objects provided by the operator, including AlertManager, ServiceMonitor and Prometheus objects.

The first step is to install Helm, which is a little bit outside of the scope of this post but there are lots of good guides on how to do it.  With Helm up and running you can easily install the operator and the accompanying kube-prometheus manifests which give you access to lots of extra Kubernetes metrics, alerts and dashboards.

helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm install --name prometheus-operator --set rbacEnable=true --namespace monitoring coreos/prometheus-operator
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring

After a few moments you can check to see that resources were created correctly as a quick test.

kubectl get pods -n monitoring

NOTE: You may need to manually add the “prometheus” service account to the monitoring namespace after creating everything.  I ran into some issues because Helm didn’t do this automatically.  You can check this with kubectl get events.

Prometheus Operator configuration

Below are steps for creating custom objects (CRDs) that the Prometheus Operator uses to automatically generate configuration files and handle all of the other management behind the scenes.

These objects are wired up in such a way that configs get reloaded and Prometheus is automatically updated when it sees a change.  The object definitions basically express the Prometheus configuration in a format that Kubernetes understands, which the operator then converts back into Prometheus configuration.

First we make a ServiceMonitor for monitoring the snmp exporter.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: snmp-exporter
    prometheus: kube-prometheus # tie servicemonitor to correct Prometheus
  name: snmp-exporter
spec:
  jobLabel: k8s-app
  selector:
    matchLabels:
      app: snmp-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  endpoints:
  - interval: 60s
    port: http-metrics
    params:
      module:
      - if_mib # Select which SNMP module to use
      target:
      - 1.2.3.4 # Modify this to point at the SNMP target to monitor
    path: "/snmp"
    targetPort: 9116
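Assuming the ServiceMonitor above is saved to a file (the snmp-servicemonitor.yml name below is just a placeholder), create it in the monitoring namespace alongside the rest of the stack.

kubectl create -f snmp-servicemonitor.yml -n monitoring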

Next, we create a custom alert and tie it to our Prometheus Operator.  The alert doesn’t do anything useful, but it is a good demo for showing how easy it is to add and manage alerts using the Operator.

Create an alert-example.yml configuration file, add it to k8s as a configmap, and label it so it matches the ruleSelector label selector; the prometheus operator will do the rest.  The example below shows how to hook up a test rule to an existing Prometheus (kube-prometheus) alert manager, handled by the prometheus-operator.

kind: ConfigMap
apiVersion: v1
metadata:
  name: josh-test
  namespace: monitoring
  labels:
    role: alert-rules # Standard convention for organizing alert rules
    prometheus: kube-prometheus # tie to correct Prometheus
data:
  test.rules.yaml: |
    groups:
    - name: test.rules # Top level description in Prometheus
      rules:
      - alert: TestAlert
        expr: vector(1)

Once you have created the rule definition via configmap just use kubectl to create it.

kubectl create -f alert-example.yml -n monitoring

Testing and troubleshooting

You will probably need to port forward the pod to get access to its IP and port from outside the cluster.

kubectl port-forward snmp-exporter-<name> 9116

Then you should be able to visit the pod in your browser (or with curl).

localhost:9116
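You can also hit the exporter directly with curl to verify that it can reach and walk the SNMP target; the module and target parameters below just mirror the values used in the ServiceMonitor above.

curl "http://localhost:9116/snmp?module=if_mib&target=1.2.3.4"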

The exporter itself does a lot more so you will probably want to play around with it.  I plan on covering more of the details of other external exporters and more customized configurations for the SNMP exporter.

For example, if you want to do any sort of monitoring beyond basic interface stats, you will need to generate and build your own set of MIBs to gather metrics from your infrastructure, and also reconfigure your ServiceMonitor object in Kubernetes to use the correct MIBs so that the Operator updates the configuration correctly.

Conclusion

The number of options for how to set up and use Prometheus is one area of confusion, especially for newcomers.  There are lots of ways to do things and not much direction on which to use, which can also be viewed as a strength since it allows for so much flexibility.

In some situations it makes sense to use an external (non-Operator-managed) Prometheus, for example when you need to manage and tune your own configuration files.  Likewise, the Prometheus Operator is a great fit when you are mostly concerned with managing and monitoring things inside Kubernetes and don’t need to do much external monitoring.

That said, there is some support for external monitoring using the Prometheus Operator, which I want to write about in a different post.  This support is limited to a handful of different external exporters (for the time being) so the best advice is to think about what kind of monitoring is needed and choose the best solution for your own use case.  It may turn out that both types of configurations are needed, but it may also end up being just as easy to use one method or another to manage Prometheus and its configurations.

Read More