Set up Drone on arm64 Kubernetes clusters

Continuing with the multiarch and Kubernetes narratives that I have been writing about for a while now, in this post I will be showing off some of the capabilities of the Drone.io continuous integration tool running on my arm64 Kubernetes homelab cluster.

As arm64 continues to gain popularity, more and more projects are adding support for it, including Drone.io. With its semi-recent announcement, Drone can now run on a variety of different architectures. Likewise, the Drone maintainers have been working towards a 1.0 release, which brings first class support for Kubernetes, among other things.

I won’t touch on too many of the specifics of Drone, but if you’re interested in learning more, you can check out the website. I will mostly be focusing on how to get things running in Kubernetes, which turns out to be exactly the same for both amd64 and arm64 architectures. There were a few things I discovered along the way to get the Kubernetes integration working, but for the most part things “just work”.

I started off by grabbing the Helm chart and modifying it to suit my needs. I find it easiest to template the chart and then save it off locally so I can play around with it.

Note: the example below assumes you already have Helm installed locally.

git clone git@github.com:helm/charts.git && cd charts
helm template --name drone --namespace cicd \
   --set 'sourceControl.provider=github' \
   --set 'sourceControl.github.clientID=XXXXXXXX' \
   --set 'sourceControl.secret=drone-server-secrets' \
   --set 'server.host=drone.example.com' \
   --set 'server.kubernetes.enabled=false' \
   stable/drone > /tmp/manifest.yaml

Obviously you will want to adjust the configuration to match your own settings, like the domain name and OAuth settings (I used GitHub).

After saving out the manifest, the first issue I ran into is that port 9000 is still referenced in the Helm chart. That port was used for communication between the agent and server in older releases but is no longer needed, so I just removed all references to it in my Frankenstein configuration. If you are only using the Kubernetes integration mentioned below you won’t run into these problems connecting the server and agent, but if you use the agent you will.

There is some server config that will need to be adjusted as well to get things working. For example, the OAuth settings will need to be created on the GitHub side first in order for any of this to work. Also, the Drone server host will need to be accessible from the internet, so any firewall rules will need to be added or adjusted to allow that traffic.

 env:
  # Webhook settings
  - name: DRONE_ALWAYS_AUTH
    value: "false"
  - name: DRONE_SERVER_HOST
    value: "drone.example.com"
  - name: DRONE_SERVER_PROTO
    #value: http
    value: https
  # Agent config
  - name: DRONE_RPC_SECRET
    valueFrom:
      secretKeyRef:
        name: drone
        key: secret
  # Server config
  - name: DRONE_DATABASE_DATASOURCE
    value: "/var/lib/drone/drone.sqlite"
  - name: DRONE_DATABASE_DRIVER
    value: "sqlite3"
  - name: DRONE_LOGS_DEBUG
    value: "true"
  - name: DRONE_LOGS_PRETTY
    value: "true"
  - name: DRONE_USER_CREATE
    value: "username:<github_user>,machine:false,admin:true,token:abc123"
  # Github config
  - name: DRONE_GITHUB_CLIENT_ID
    value: abcd
  - name: DRONE_GITHUB_SERVER
    value: https://github.com
  - name: DRONE_GITHUB_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: client-secret-drone
        key: secret

Add the DRONE_USER_CREATE env var to bootstrap an admin account when starting the Drone server. This will allow your user to do all of the admin things using the CLI tool.
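
To use the CLI against your instance, point it at the server and the token from the DRONE_USER_CREATE entry. Below is a quick sketch using the placeholder values from the example above.

export DRONE_SERVER=https://drone.example.com
export DRONE_TOKEN=abc123

# Confirm the CLI is talking to the server as the bootstrapped admin
drone info
drone user ls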

The secrets should get generated when you dump the Helm chart, so feel free to update those with any values you may need.

Note: if you have double checked all of your settings but builds aren’t being triggered, there is a good chance that the webhook is the problem. There is a really good post about how to troubleshoot these settings here.

Running Drone with the Kubernetes integration

This option turned out to be the easier approach. Just add the following configuration to the drone server deployment environment variables, updating according to your environment. For example, the namespace I am deploying to is called “cicd”, which will need to be updated if you choose a different namespace.

- name: DRONE_KUBERNETES_ENABLED
  value: "true"
- name: DRONE_KUBERNETES_NAMESPACE
  value: cicd
- name: DRONE_KUBERNETES_SERVICE_ACCOUNT
  value: drone-pipeline

The main downside to this method is that it creates Kubernetes Jobs for each build. By default, once these builds are done they will exit and not clean themselves up, so if you do a lot of builds your namespace will get clogged up. Newer versions of Kubernetes can set TTLs on finished Jobs so they clean themselves up, via the TTLAfterFinished feature gate, but this functionality isn’t enabled by default in Kubernetes yet and is a little bit out of the scope of this post.
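
For reference, once that feature gate is turned on the cleanup is just a field on the Job spec. The snippet below is a minimal sketch of the idea, not something the chart sets for you.

apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-example
spec:
  # Delete the Job (and its pods) five minutes after it finishes
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["echo", "done"]
      restartPolicy: Never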

Running Drone with the agent configuration

The agent uses the sidecar pattern to run a Docker in Docker (dind) container to connect to the Docker socket in order to allow the Drone agent to do its builds.

The main downside of using this approach is that there seems to be an issue (sometimes) where the Drone components can’t talk to the Docker socket; you can find a better description of this problem and more details here. It appears to be a race condition where the Docker socket isn’t mounted before the agent comes up, but I still haven’t totally solved the problem yet. The usual advice for getting around it is to run the agent on a dedicated standalone host to avoid race conditions and other pitfalls.

That being said, if you still want to use this method you will need to add an additional Deployment to the config for the Drone agent. If you use the agent you can disregard the above Kubernetes environment variable configuration and instead set the appropriate environment variables in the agent. Below is the working snippet I used for deploying the agent to my test cluster.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-agent
  labels:
    app: drone
    component: agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone
        component: agent
    spec:
      serviceAccountName: drone
      containers:
      - name: agent
        image: "docker.io/drone/agent:1.0.0-rc.6"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        env:
          # This value should point to the Drone server service
          - name: DRONE_RPC_SERVER
            value: http://drone.cicd
          - name: DRONE_RPC_SECRET
            valueFrom:
              secretKeyRef:
                name: drone
                key: secret
          - name: DOCKER_HOST
            value: tcp://localhost:2375
          - name: DRONE_LOGS_DEBUG
            value: "true"
          # Uncomment this for additional trace logs
          #- name: DRONE_LOGS_TRACE
          #  value: "true"
          - name: DRONE_LOGS_PRETTY
            value: "true"

      - name: dind
        image: "docker.io/library/docker:18.06-dind"
        imagePullPolicy: IfNotPresent
        env:
        - name: DOCKER_DRIVER
          value: overlay2

        securityContext:
          privileged: true

        volumeMounts:
          - name: docker-graph-storage
            mountPath: /var/lib/docker
      volumes:
      - name: docker-graph-storage
        emptyDir: {}

I have gotten the agent to work, but I haven’t had much success getting it working consistently. I would avoid this method unless you have to use it or, as mentioned above, can get a dedicated host for running builds.

Testing it out

After wiring everything up, the rest is easy. Add a file called .drone.yml to a repository that you would like to automate builds for. You can find out more about the various capabilities here.

For my use case I wanted to tell Drone to build and publish an arm64-based Docker image whenever a change to master occurs. You can look at my Drone configuration here to get a better idea of the multiarch options as well as authenticating to a Docker registry.
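
If you just want a starting point, the following is a rough sketch of what that kind of pipeline looks like in the 1.0 syntax. The repo and secret names are placeholders, so adjust them for your own registry.

kind: pipeline
name: default

platform:
  os: linux
  arch: arm64

steps:
- name: publish
  image: plugins/docker
  settings:
    repo: example/my-arm64-image
    tags: latest
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
  when:
    branch:
    - master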

After adding the .drone.yml to your repo and triggering a build you should see something similar to the following in your local Drone instance.

Sample Drone build

If everything worked correctly and is green then you should be good to go. Obviously there is a lot of overhead that Kubernetes brings, but the actual Drone setup is really straightforward. Just set things up on the GitHub side, translate them into Kubernetes configurations, add some other Drone specific config options, and you should have a nice CI/CD environment ready to go.

Conclusion

I am very excited to see Drone grow and mature and use it more in the future. It is simple yet flexible and it fits nicely into the new paradigm of using containers for everything.

The new 1.0 YAML syntax is really nice as well, as it basically maps to the conventions that Kubernetes has chosen, so if you are familiar with that syntax you should feel at home. You can check out the available plugins here, which cover about 90% of the use cases you would see in the wild.

One downside is that YAML syntax errors are really hard to debug, and there isn’t much in the way of output to figure out where your problems are. The best approach I have found is to run the .drone.yml file through the Drone CLI lint/fmt tool before committing and building.
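
Something like the following works as a quick pre-commit check (run from the repo root). Flags can vary slightly between CLI versions, so check drone --help if these don’t match yours.

# Check the pipeline definition for syntax errors before pushing
drone lint

# Optionally normalize the YAML formatting in place
drone fmt --save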

The Drone CLI tool is really powerful and could probably be its own post. There are some links in the references that show off some of its other features.

References

There are a few cool Drone resources I would definitely recommend checking out if you are interested in running Drone in your own environment. The docs reference is a good place to start for finding information about how to configure and tweak various Drone settings.

https://docs.drone.io/reference/

Here is a link to the CLI reference.

https://docs.drone.io/cli/

I also definitely recommend checking out the jsonnet extension docs, which can be used to help improve automation workflows. The second link shows a good example of how it works and the third link shows some practical applications of using jsonnet to help manage complicated CI pipelines.

https://docs.drone.io/extend/config/jsonnet/
https://github.com/drone/drone/blob/master/.drone.jsonnet
https://medium.com/dazn-tech/simplify-your-ci-pipeline-configuration-with-jsonnet-5a96cd9ccc51

Here is a link for various cool drone stuff, including blog posts and tools.

https://github.com/drone/awesome-drone



Manage Kubernetes Plugins with Krew

There have been quite a few posts recently describing how to write custom plugins, now that the mechanism for creating and working with them has been made easier in upstream Kubernetes (as of v1.12). Here are the official plugin docs if you’re interested in learning more about how it all works.

One neat thing about the new plugin architecture is that plugins don’t need to be written in Go to be recognized by kubectl. There is a document in the Kubernetes repo that describes how to write your own custom plugin and even a helper library for making it easier to write plugins.

Instead of just writing another tutorial about how to make your own plugin, I decided to show how easy it is to grab and experiment with existing plugins.

Installing krew

If you haven’t heard about it yet, Krew is a new tool released by the Google Container Tools team for managing Kubernetes plugins. As far as I know this is the first plugin manager offering available, and it really scratches my itch for finding a specific tool for a specific job (while also being easy to use).

Krew basically builds on top of the kubectl plugin architecture, making it easier to deal with plugins by providing a sort of framework for keeping track of them and making sure they are doing what they are supposed to. For example, here is what kubectl plugin list reports after a couple of plugins have been installed with Krew:

The following kubectl-compatible plugins are available:

/home/jmreicha/.krew/bin/kubectl-krew
/home/jmreicha/.krew/bin/kubectl-rbac_lookup
...

You can manage plugins without Krew, but if you work with a lot of plugins complexity and maintenance generally start to escalate quickly if you are managing everything manually. Below I will show you how easy it is to deal with plugins instead using Krew.

There are installation instructions in the repo, but it is really easy to get going. Run the following commands in your shell and you are ready to go.

(
  set -x; cd "$(mktemp -d)" &&
  curl -fsSLO "https://storage.googleapis.com/krew/v0.2.1/krew.{tar.gz,yaml}" &&
  tar zxvf krew.tar.gz &&
  ./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" install \
    --manifest=krew.yaml --archive=krew.tar.gz
)

# Then append the following to your .zshrc or .bashrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

# Then source your shell to pick up the path
source ~/.zshrc # or ~/.bashrc

You can use the kubectl plugin list command to look at all of your plugins.

Test it out to make sure it works.

kubectl krew help

If everything went smoothly you should see some help information and can start working with the plugin manager. For example, if you want to check currently available plugins you can use Krew.

kubectl krew update
kubectl krew search

Or you can browse around the plugin index on GitHub. Once you find a plugin you want to try out, just install it.

kubectl krew install view-utilization

That’s it. Krew should take care of downloading the plugin and putting it in the correct path to make it usable right away.

kubectl view-utilization

Some plugins require additional tools to be installed beforehand as dependencies but should tell you which ones are required when they are installed the first time.

Installing plugin: view-secret
CAVEATS:
\
 |  This plugin needs the following programs:
 |  * jq
/
Installed plugin: view-secret

When you are done with a plugin, you can uninstall it just as easily as it was installed.

kubectl krew uninstall view-secret

Conclusion

I must say I am a really big fan of this new model for managing and creating plugins, and I think it will encourage the community to develop even more tools so I’m looking forward to seeing what people come up with.

Likewise I think Krew is a great fit for this because it is super easy to get installed and started with, which I think is important for gaining widespread adoption in the community. If you have an idea for a Kubectl plugin please consider adding it to the krew-index. The project maintainers are super friendly and are great about feedback and getting things merged.


Kubernetes Tips and Tricks

I have been getting more familiar with Kubernetes in the past few months and have uncovered some interesting capabilities that I had no idea existed when I started, which have come in handy in helping me solve some interesting and unique problems.  I’m sure there are many more tricks I haven’t found, so please feel free to let me know of other tricks you may know of.

Semi related; if you haven’t already checked it out, I wrote a post awhile ago about some of the useful kubectl tricks I have discovered.  The CLI has improved since then so I’m sure there are more and better tricks now but it is still a good starting point for new users or folks that are just looking for more ideas of how to use kubectl.  Again, let me know of any other useful tricks and I will add them.

Kubernetes docs

The Kubernetes community has somewhat of a love hate relationship with the documentation, although that relationship has been getting much better over time and continues to improve.  Almost all of the stuff I have discovered is scattered around the documentation; the main issue is that it is a little difficult to find unless you know what you’re looking for.  There is so much information packed into these docs, and so many features are tucked away that aren’t obvious to newcomers.  The docs keep getting better, but there are still a few gaps in examples and general use cases, and the “why” of using various features is often still lacking.

Another point I’d like to quickly cover is the API reference documentation.  When you are looking for some feature or functionality and the main documentation site fails, this is the place to go look as it has everything that is available in Kubernetes.  Unfortunately the API reference is also currently a challenge to use and is not user friendly (especially for newcomers), so if you do end up looking through the API you will have to spend some time to get familiar with things, but it is definitely worth reading through to learn about capabilities you might not otherwise find.

For now, the best advice I have for working with the docs and testing functionality is trial and error.  Katacoda is an amazing resource for playing around with Kubernetes functionality, so definitely check that out if you haven’t yet.

Simple leader election

Leader election built on Kubernetes is really neat because it buys you a quick and dirty way to do some pretty complicated tasks.  Usually, implementing leader election requires extra software like ZooKeeper, etcd, consul or some other distributed key/value store for keeping track of consensus, but it is built into Kubernetes, so you don’t have much extra work to get it working.

Leader election piggybacks off the same etcd that Kubernetes uses, as well as Kubernetes annotations, which gives users a robust way to do distributed tasks without having to reinvent the wheel for doing complicated leader elections.

Basically, you can deploy the leader-elector as a sidecar with any app you deploy.  Then, any container in the pod that’s interested in who the master is can check by visiting the HTTP endpoint (localhost:4044 by default) and it will get back some JSON with the current leader.
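
Here is a rough sketch of what that sidecar looks like. The image tag, election name, and port here follow the upstream leader-elector example and are assumptions, so double check them against the docs.

apiVersion: v1
kind: Pod
metadata:
  name: leader-election-demo
spec:
  containers:
  - name: app
    image: nginx
  # Sidecar that participates in the election and serves the result over HTTP,
  # so any container in the pod can curl http://localhost:4044
  - name: leader-elector
    image: gcr.io/google_containers/leader-elector:0.5
    args:
    - --election=example
    - --http=localhost:4044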

Shared process namespace across containers

This is a beta feature currently (as of 1.13), so it is now enabled by default.  It is interesting because it allows you to share a process (PID) namespace between the containers in a pod.  Unfortunately the docs don’t really tell you why this is useful.

Basically, if you add shareProcessNamespace: true to your pod spec, you turn on the ability to share the PID namespace across containers. This allows you to do things like changing a configuration in one container, sending a SIGHUP, and then reloading that configuration in another container.

For example, running a sidecar that controls configuration files, or reaping orphaned zombie processes.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    stdin: true
    tty: true
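
With that pod running, the workflow looks roughly like the following. The PID will differ on your system, so this is just an illustration.

kubectl exec -it nginx -c shell -- sh

# Inside the shell container the nginx processes are visible
ps ax | grep nginx

# Send the nginx master process (use the PID from ps) a SIGHUP to reload its config
kill -HUP <nginx-master-pid>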

Custom termination messages

Custom termination messages can be useful when debugging tricky situations.

You can actually customize pod terminations by using terminationMessagePolicy, which controls how termination messages are surfaced. For example, with FallbackToLogsOnError you can tell Kubernetes to use the container log output if the termination message file is empty and the container exited with an error.
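
Here is a minimal sketch of what setting the policy looks like on a container (the pod and container names are made up).

apiVersion: v1
kind: Pod
metadata:
  name: msg-policy-demo
spec:
  containers:
  - name: msg-policy-demo-container
    image: debian
    # Use the tail of the container logs as the termination message when the
    # message file is empty and the container exits with an error
    terminationMessagePolicy: FallbackToLogsOnError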

Likewise, you can set terminationMessagePath in the container spec to customize the path of the file used to record successes and failures when a pod terminates.

apiVersion: v1
kind: Pod
metadata:
  name: msg-path-demo
spec:
  containers:
  - name: msg-path-demo-container
    image: debian
    terminationMessagePath: "/tmp/my-log"

Container lifecycle hooks

Lifecycle hooks are really useful for doing things after a container has started (such as joining a cluster) or for running cleanup commands/code when a container is stopped (such as leaving a cluster).

Below is a straightforward example taken from the docs that writes a message after a pod starts and sends a quit signal to nginx when the pod is destroyed.

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]

Kubernetes downward API

This one is probably better known, but I still think it is useful enough to add to the list.  The downward API basically allows you to grab all sorts of useful metadata information about containers, including host names and IP addresses.  The downward API can also be used to retrieve information about resources for pods.

The simplest example to show off the downward API is to use it to configure a pod to use the hostname of the node as an environment variable.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_NODE_NAME;
          sleep 10;
        done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

Injecting a script into a container from a configmap

This is a useful trick when you want to add a layer on top of a Docker container but don’t necessarily want to build either a custom image or update an existing image.  By injecting the script as a configmap directly into the container you can augment a Docker image to do basically any extra work you need it to do.

The only caveat is that in Kubernetes, configmaps are by default not set to be executable.

In order to make your script work inside of Kubernetes you will simply need to add defaultMode: 0744 to your configmap volume spec. Then simply mount the config as a volume like you normally would and you should be able to run your script as a normal command.

...
volumeMounts:
- name: wrapper
  mountPath: /scripts
volumes:
- name: wrapper
  configMap:
    name: wrapper
    defaultMode: 0744
...
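
For completeness, here is a rough sketch of the other half, the ConfigMap that holds the script. The names and script contents are made up for illustration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: wrapper
data:
  wrapper.sh: |
    #!/bin/sh
    # Do any extra setup here, then hand off to the real entrypoint
    echo "running wrapper"
    exec "$@"

The container’s command can then point at the mounted script, for example command: ["/scripts/wrapper.sh", "nginx", "-g", "daemon off;"].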

Using commands as liveness/readiness checks

This one is also pretty well known but often forgotten.  Using commands as health checks is a nice way to check that things are working.  For example, if you are doing complicated DNS things and want to check if DNS has updated you can use dig.  Or if your app updates a file when it becomes healthy, you can run a command to check for this.

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

Host aliases

Host aliases in Kubernetes offer a simple way to easily update the /etc/hosts file of a container.  This can be useful for example if a localhost name needs to be mapped to some DNS name that isn’t handled by the DNS server.

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

Conclusion

As mentioned, these are just a few gems that I have uncovered; I’m sure there are a lot of other neat tricks out there.  As I get more experience using Kubernetes I will be sure to update this list.  Please let me know if there are things that should be on here that I missed or don’t know about.


Building k8s Manifests with Helm Templates

As I have started working more with Kubernetes lately I have found it very valuable to see what a manifest looks like before deploying it.  Helm can basically be used as a quick and dirty way to see what a rendered Helm template looks like.  Rendering locally also gives you the security advantage of not running tiller in your production cluster, since you can deploy the rendered templates yourself.

Helm has been a subject of contention for a while now.  Security folks REALLY don’t like running the server side component because it basically allows root access into your cluster unless it is managed a specific way, which tends to add much more complexity to the cluster.  There are plans in Helm 3 to remove the server side component as well as to offer some more flexible configuration options that don’t rely on the Go templating, but that functionality isn’t ready yet, so I find rendering and deploying a nice middle ground for now.

At the same time, Helm does have some nice selling points which make it a nice option for certain situations.  I’d say the main draw to Helm is that it is ridiculously easy to set up and use, which is especially nice for things like local development or testing or just trying to figure out how things work in Kubernetes.  The other thing that Helm does that is difficult to do otherwise, is it manages deployments and versions and environments, although there have been a number of users that have had issues with these features.

Also check out Kustomize.  If you aren’t familiar, it is basically a tool for managing per environment customizations for YAML manifests and configurations.  You can get pretty far by rendering templates and overlaying kustomize on top of other configurations for managing different environments, etc.
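
As a tiny sketch of that idea (the file names here are made up), a kustomization that overlays a rendered Helm template might look like this:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: test
resources:
- helm-rendered.yaml
patchesStrategicMerge:
- prod-resource-limits.yaml

Running kustomize build . | kubectl apply -f - then renders and applies the overlaid manifests.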

Render a template (client side)

The first step to getting a working rendered template is to install the Helm client side component. There are installation instructions for various platforms here.

brew install kubernetes-helm # (on OSX)

You will also need to grab some charts to test with.

git clone git@github.com:kubernetes/charts.git
cd charts/stable/metallb
helm template --namespace test --name test .

Below is an example with customized variables.

helm template --namespace test --name test --set controller.resources.limits.cpu=100m .

You can dump the rendered template to a file if you want to look at it or change anything.

helm template --namespace test --name test --set controller.resources.limits.cpu=100m . > helm-test.yaml

You can even deploy these rendered templates directly if you want to.

helm template --namespace test --name test --set controller.resources.limits.cpu=100m . | kubectl apply -f -

Render a template (server side)

Make sure tiller is running in the cluster first.  If you haven’t set up Helm on the server side before, you basically just need to install tiller into the cluster.  Again, I would not recommend doing this on anything outside of a throwaway or testing environment.  After the Helm client has been installed you can use it to spin up tiller in the cluster.

helm init

Below is a basic example using the metallb chart.

helm install --namespace test --name test stable/metallb --dry-run --debug

Again, you can use customized variables.

helm install --namespace test --name test stable/metallb --set controller.resources.limits.cpu=100m --dry-run --debug

You may notice some extra configurations at the very beginning of the output.  This is basically just showing default values that get applied as well as things that have been customized by the user.  It is a quick way to see what kinds of things can be changed in the Helm chart.

Conclusion

Helm offers many other commands and options so I definitely recommend playing around with it and exploring the other things it can do.

I like to use both of these methods, but for now I just prefer to run a local tiller instance in a throwaway cluster (Docker for Mac) and pull in charts from the upstream repositories, without having to git clone charts if I’m just looking at how the Kubernetes manifest configuration works.  You can’t really use the server side rendering to actually deploy the manifests though, because it sticks a bunch of other information into the command output.

All in all the Helm templating is pretty powerful and combining it with something like kustomize should get you to around 90% of where you need to be, unless you are managing much more complex and complicated configurations.  The only thing that this method doesn’t lend itself very well to is managing releases and other metadata.  Otherwise it is a great way to manage configurations.


Exploring Docker Manifests

As part of my recent project to build an ARM based Kubernetes cluster (more on that in a different post) I have run into quite a few cross platform compatibility issues trying to get containers working in my cluster.

After a little bit of digging, I found that support for manifest lists was added in version 2.2 of the Docker image specification, which allows Docker images to be built against different platforms, including arm and arm64.  To add to this, I just recently discovered that newer versions of Docker have a manifest sub-command that you can enable as an experimental feature to interact with image manifests.  The manifest command is great for exploring Docker images without having to pull, run, and test them locally, or fighting with curl to get this information about an image from a Docker registry.

Enable the manifest command in Docker

First, make sure you have a semi-recent version of Docker installed; I’m using 18.03.1 in this post.

Edit your Docker configuration file, usually located at ~/.docker/config.json.  The following example assumes you have authentication configured, but really the only additional configuration needed is { "experimental": "enabled" }.

{
  "experimental": "enabled",
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXX"
    }
  }
}

After adding the experimental configuration to the client you should be able to access the docker manifest commands.

docker manifest -h

To inspect a manifest just provide an image to examine.

docker manifest inspect traefik

This will spit out a bunch of information about the Docker image, including schema, platforms, and digests, which can be useful for finding out which platforms different images support.

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:36df85f84cb73e6eee07767eaad2b3b4ff3f0a9dcf5e9ca222f1f700cb4abc88",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:f98492734ef1d8f78cbcf2037c8b75be77b014496c637e2395a2eacbe91e25bb",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:7221080406536c12abc08b7e38e4aebd811747696a10836feb4265d8b2830bc6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

As you can see, the above image (traefik) supports amd64, arm, and arm64 architectures.  This is a really handy way of determining whether an image works across different platforms without having to pull it and run a command against it to see if it works.  The manifest sub-command has some other useful features that allow you to create, annotate, and push cross platform images, but I won’t go into details here.
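
For reference, the general shape of that workflow looks roughly like the following. The image names are placeholders, and each per-architecture tag has to be pushed to the registry first.

# Group per-architecture images under a single manifest list
docker manifest create jmreicha/example:latest \
  jmreicha/example:amd64 jmreicha/example:arm64

# Attach architecture metadata to the arm64 entry
docker manifest annotate jmreicha/example:latest \
  jmreicha/example:arm64 --os linux --arch arm64

# Push the manifest list to the registry
docker manifest push jmreicha/example:latest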

Manifest tool

I’d also like to quickly mention the Docker manifest-tool.  This tool is more or less superseded by the built-in Docker manifest command but still works basically the same way, allowing users to inspect, annotate, and push manifests.  The manifest-tool has a few additional features and supports several registries other than Dockerhub, and even has a utility script to see if a given registry supports the Docker v2 API and 2.2 image spec.  It is definitely still a good tool to look at if you are interested in publishing multi platform Docker images.

Downloading the manifest tool is easy as it is distributed as a Go binary.

curl -OL https://github.com/estesp/manifest-tool/releases/download/latest/manifest-tool-linux-amd64
mv manifest-tool-linux-amd64 manifest-tool
chmod +x manifest-tool

Once you have the manifest-tool set up you can start using it, similar to the manifest inspect command.

./manifest-tool inspect traefik

This will dump out information about the image manifest if it exists.

Name:   traefik (Type: application/vnd.docker.distribution.manifest.list.v2+json)
Digest: sha256:eabb39016917bd43e738fb8bada87be076d4553b5617037922b187c0a656f4a4
 * Contains 3 manifest references:
1    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
1       Digest: sha256:e65103d16ded975f0193c2357ccf1de13ebb5946894d91cf1c76ea23033d0476
1  Mfst Length: 739
1     Platform:
1           -      OS: linux
1           - OS Vers:
1           - OS Feat: []
1           -    Arch: amd64
1           - Variant:
1           - Feature:
1     # Layers: 2
         layer 1: digest = sha256:03732cc4924a93fcbcbed879c4c63aad534a63a64e9919eceddf48d7602407b5
         layer 2: digest = sha256:6023e30b264079307436d6b5d179f0626dde61945e201ef70ab81993d5e7ee15

2    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
2       Digest: sha256:6cb42aa3a9df510b013db2cfc667f100fa54e728c3f78205f7d9f2b1030e30b2
2  Mfst Length: 739
2     Platform:
2           -      OS: linux
2           - OS Vers:
2           - OS Feat: []
2           -    Arch: arm
2           - Variant: v6
2           - Feature:
2     # Layers: 2
         layer 1: digest = sha256:8996ab8c9ae2c6afe7d318a3784c7ba1b1b72d4ae14cf515d4c1490aae91cab0
         layer 2: digest = sha256:ee51eed0bc1f59a26e1d8065820c03f9d7b3239520690b71fea260dfd841fba1

3    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
3       Digest: sha256:e12dd92e9ae06784bd17d81bd8b391ff671c8a4f58abc8f8f662060b39140743
3  Mfst Length: 739
3     Platform:
3           -      OS: linux
3           - OS Vers:
3           - OS Feat: []
3           -    Arch: arm64
3           - Variant: v8
3           - Feature:
3     # Layers: 2
         layer 1: digest = sha256:78fe135ba97a13abc86dbe373975f0d0712d8aa6e540e09824b715a55d7e2ed3
         layer 2: digest = sha256:4c380abe0eadf15052dc9ca02792f1d35e0bd8a2cb1689c7ed60234587e482f0

Likewise, you can annotate and push image manifests using the manifest-tool.  Below is an example command for pushing multiple image architectures.

./manifest-tool --docker-cfg '~/.docker' push from-args --platforms "linux/amd64,linux/arm64" --template jmreicha/example:test --target "jmreicha/example:test"

mquery

I’d also like to touch quickly on the mquery tool.  If you’re only interested in seeing whether a Docker image uses a manifest list, along with high level multi-platform information, you can run this tool as a container.

docker run --rm mplatform/mquery traefik

Here’s what the output might look like.  Super simple but useful for quickly getting platform information.

Image: traefik
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm64/v8

This can be useful if you don’t need a solution that is quite as heavy as manifest-tool or enabling the built in Docker experimental support.

You will still need to figure out how to build the image for each architecture first before pushing, but having the ability to use one image for all architectures is a really nice feature.

There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi platform images using a single name.  This will be a great boon for helping to bring ARM adoption to the forefront and will help make the container experience on ARM much better going forward.
