There have been quite a few posts recently describing how to write custom plugins, now that the mechanism for creating and working with them has been made easier in upstream Kubernetes (as of v1.12). Here are the official plugin docs if you’re interested in learning more about how it all works.
One neat thing about the new plugin architecture is that plugins don’t need to be written in Go to be recognized by kubectl. There is a document in the Kubernetes repo that describes how to write your own custom plugin, and even a helper library for making it easier to write plugins.
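To give a sense of how simple the mechanism is, here is a minimal sketch: any executable on your PATH whose name starts with kubectl- gets picked up as a subcommand. The kubectl-hello name and install path below are just examples I made up for illustration.
# A kubectl plugin is just an executable named kubectl-<something> on your PATH
cat <<'EOF' > /usr/local/bin/kubectl-hello
#!/usr/bin/env bash
# Print a greeting plus the current context to show we can shell out to kubectl
echo "Hello from a kubectl plugin! Current context: $(kubectl config current-context)"
EOF
chmod +x /usr/local/bin/kubectl-hello
# kubectl now exposes it as a subcommand
kubectl hello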
Instead of just writing another tutorial about how to make your own plugin, I decided to show how easy it is to grab and experiment with existing plugins.
Installing krew
If you haven’t heard about it yet, Krew is a new tool released by the Google Container Tools team for managing Kubernetes plugins. As far as I know this is the first plugin manager offering available, and it really scratches my itch for finding a specific tool for a specific job (while also being easy to use).
Krew basically builds on top of the kubectl plugin architecture to make plugins easier to deal with, providing a lightweight framework for discovering and installing plugins, keeping track of what you have installed, and making sure everything is doing what it is supposed to.
The following kubectl-compatible plugins are available:
/home/jmreicha/.krew/bin/kubectl-krew
/home/jmreicha/.krew/bin/kubectl-rbac_lookup
...
You can manage plugins without Krew, but if you work with a lot of plugins, complexity and maintenance quickly start to escalate when you manage everything manually. Below I will show you how easy it is to deal with plugins using Krew instead.
There are installation instructions in the repo, but it is really easy to get going. Run the following commands in your shell and you are ready to go.
(
set -x; cd "$(mktemp -d)" &&
curl -fsSLO "https://storage.googleapis.com/krew/v0.2.1/krew.{tar.gz,yaml}" &&
tar zxvf krew.tar.gz &&
./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" install \
--manifest=krew.yaml --archive=krew.tar.gz
)
# Then append the following to your .zshrc or bashrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
# Then source your shell to pick up the path
source ~/.zshrc # or ~/.bashrc
You can use the kubectl plugin list command to look at all of your plugins.
Test it out to make sure it works.
kubectl krew help
If everything went smoothly you should see some help information and can start working with the plugin manager. For example, if you want to check currently available plugins you can use Krew.
kubectl krew update
kubectl krew search
Or you can browse around the plugin index on GitHub. Once you find a plugin you want to try out, just install it.
kubectl krew install view-utilization
That’s it. Krew should take care of downloading the plugin and putting it in the correct path to make it usable right away.
kubectl view-utilization
Some plugins require additional tools to be installed as dependencies, but they should tell you which ones are required when they are first installed.
Installing plugin: view-secret
CAVEATS:
\
| This plugin needs the following programs:
| * jq
/
Installed plugin: view-secret
When you are done with a plugin, you can uninstall it just as easily as it was installed.
kubectl krew uninstall view-secret
Conclusion
I must say I am a really big fan of this new model for managing and creating plugins, and I think it will encourage the community to develop even more tools, so I’m looking forward to seeing what people come up with.
Likewise I think Krew is a great fit for this because it is super easy to get installed and started with, which I think is important for gaining widespread adoption in the community. If you have an idea for a Kubectl plugin please consider adding it to the krew-index. The project maintainers are super friendly and are great about feedback and getting things merged.
I have been getting more familiar with Kubernetes in the past few months and have uncovered some interesting capabilities that I had no idea existed when I started, which have come in handy in helping me solve some interesting and unique problems. I’m sure there are many more tricks I haven’t found, so please feel free to let me know of other tricks you may know of.
Semi related: if you haven’t already checked it out, I wrote a post a while ago about some of the useful kubectl tricks I have discovered. The CLI has improved since then so I’m sure there are more and better tricks now, but it is still a good starting point for new users or folks that are just looking for more ideas of how to use kubectl. Again, let me know of any other useful tricks and I will add them.
Kubernetes docs
The Kubernetes community has somewhat of a love/hate relationship with the documentation, although that relationship has been getting much better over time and continues to improve. Almost all of the stuff I have discovered is scattered around the documentation; the main issue is that it is a little difficult to find unless you know what you’re looking for. There is so much information packed into these docs, and so many features are tucked away that aren’t obvious to newcomers. The docs keep getting better and better, but there are still gaps in the examples and general use cases, and the “why” of using various features is often still lacking.
Another point I’d like to quickly cover is the API reference documentation. When you are looking for some feature or functionality and the main documentation site fails, this is the place to go look as it has everything that is available in Kubernetes. Unfortunately the API reference is also currently a challenge to use and is not user friendly (especially for newcomers), so if you do end up looking through the API you will have to spend some time to get familiar with things, but it is definitely worth reading through to learn about capabilities you might not otherwise find.
For now, the best advice I have for working with the docs and testing functionality is trial and error. Katacoda is an amazing resource for playing around with Kubernetes functionality, so definitely check that out if you haven’t yet.
Leader election built on Kubernetes is really neat because it buys you a quick and dirty way to do some pretty complicated tasks. Usually, implementing leader election requires extra software like ZooKeeper, etcd, Consul or some other distributed key/value store for keeping track of consensus, but it is built into Kubernetes, so you don’t have much extra work to get it working.
Leader election piggybacks on the same etcd that Kubernetes itself uses, along with Kubernetes annotations, which gives users a robust way to coordinate distributed tasks without having to reinvent the wheel for complicated leader elections.
Basically, you can deploy the leader-elector as a sidecar with any app you deploy. Then, any container in the pod that’s interested in who the master is can check by visiting the HTTP endpoint (localhost:4044 by default) and will get back some JSON with the current leader.
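As a rough sketch, the sidecar pattern looks something like the pod below. The leader-elector image name, tag, and election name are assumptions based on the original Kubernetes leader election example, so double check them before using.
apiVersion: v1
kind: Pod
metadata:
  name: leader-election-demo
spec:
  containers:
  # The leader-elector sidecar runs the election and serves the result over HTTP
  - name: leader-elector
    image: k8s.gcr.io/leader-elector:0.5   # assumed image/tag, check before using
    args:
    - --election=example
    - --http=0.0.0.0:4044
    ports:
    - containerPort: 4044
  # Any other container in the pod can ask who the leader is on localhost:4044
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:4044; sleep 10; done"]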
This is currently a beta feature (as of 1.13), so it is now enabled by default. It is interesting because it allows you to share the process namespace between containers in a pod. Unfortunately the docs don’t really tell you why this feature is useful.
Basically, if you add shareProcessNamespace: true to your pod spec, you turn on the ability to share process IDs across the containers in the pod. This allows you to do things like changing a configuration in one container, sending a SIGHUP, and then reloading that configuration in another container.
For example, running a sidecar that controls configuration files, or reaping orphaned zombie processes.
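Here is a minimal sketch of what that looks like in a pod spec, adapted from the docs example; the image choices are just illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: share-pid-demo
spec:
  # Turn on a shared process namespace for all containers in the pod
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      capabilities:
        add: ["SYS_PTRACE"]   # lets the sidecar signal processes owned by other containers
With that in place you can exec into the sidecar and see (and signal) the nginx processes, for example with kubectl exec share-pid-demo -c sidecar -- ps ax.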
Custom termination messages can be useful when debugging tricky situations.
You can actually customize pod terminations by using terminationMessagePolicy, which controls how termination messages get reported. For example, by using FallbackToLogsOnError you can tell Kubernetes to use container log output if the termination message is empty and the container exited with an error.
Likewise, you can set terminationMessagePath in the container spec to customize the file a container writes its termination message to, for reporting successes and failures when a pod terminates.
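As a quick sketch, both fields live on the container spec. The pod below just writes a message and exits with an error so there is something to look at; the names and message are made up.
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    # Write a termination message and then fail so the message shows up in the pod status
    command: ["sh", "-c", "echo 'something went terribly wrong' > /dev/termination-log; exit 1"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
You can then pull the message back out of the pod status, for example with something like kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'.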
Lifecycle hooks are really useful for doing things either after a container has started (such as joining a cluster) or for running commands/code for cleanup when a container is stopped (such as leaving a cluster).
Below is a straightforward example, taken from the docs, that writes a message after a pod starts and sends a quit signal to nginx when the pod is destroyed.
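It goes roughly like this (adapted from memory of the docs example, so check the official docs for the canonical version):
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      # Runs right after the container is created
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      # Runs just before the container is terminated
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]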
This one is probably better known, but I still think it is useful enough to add to the list. The downward API basically allows you to grab all sorts of useful metadata about containers, including host names and IP addresses. The downward API can also be used to retrieve information about resources for pods.
The simplest example to show off the downward API is to use it to configure a pod to use the hostname of the node as an environment variable.
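A minimal sketch of that looks like the following; the environment variable name is arbitrary.
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo Running on node $MY_NODE_NAME && sleep 3600"]
    env:
    # Pull the node name out of the pod spec via the downward API
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName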
This is a useful trick when you want to add a layer on top of a Docker container but don’t necessarily want to build either a custom image or update an existing image. By injecting the script as a configmap directly into the container you can augment a Docker image to do basically any extra work you need it to do.
The only caveat is that in Kubernetes, files mounted from configmaps are not executable by default.
In order to make your script work inside of Kubernetes you simply need to add defaultMode: 0744 to your configmap volume spec. Then mount the configmap as a volume like you normally would, and you should be able to run your script as a normal command.
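Putting those pieces together, a sketch looks something like this; the script contents, configmap name, and mount path are all just placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: extra-script
data:
  run.sh: |
    #!/bin/sh
    echo "doing some extra setup work"
    exec sleep 3600
---
apiVersion: v1
kind: Pod
metadata:
  name: script-demo
spec:
  containers:
  - name: app
    image: busybox
    # Run the injected script instead of the image's default command
    command: ["/scripts/run.sh"]
    volumeMounts:
    - name: script
      mountPath: /scripts
  volumes:
  - name: script
    configMap:
      name: extra-script
      defaultMode: 0744   # make the mounted script executable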
This one is also pretty well known but often forgotten. Using commands as health checks is a nice way to check that things are working. For example, if you are doing complicated DNS things and want to check if DNS has updated, you can use dig. Or if your app updates a file when it becomes healthy, you can run a command to check for this.
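As a sketch, an exec liveness probe is just a command run inside the container; the file check below is the classic docs-style example, and the file path is arbitrary.
apiVersion: v1
kind: Pod
metadata:
  name: exec-probe-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        # The container is considered healthy as long as this command exits 0
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5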
Host aliases in Kubernetes offer a simple way to easily update the /etc/hosts file of a container. This can be useful for example if a localhost name needs to be mapped to some DNS name that isn’t handled by the DNS server.
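Here is a minimal sketch; the hostnames and IPs are made up.
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  # These entries get appended to /etc/hosts in every container in the pod
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
  - ip: "10.1.2.3"
    hostnames:
    - "bar.remote"
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts && sleep 3600"]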
As mentioned, these are just a few gems that I have uncovered, I’m sure there are a lot of other neat tricks out there. As I get more experience using Kubernetes I will be sure to update this list. Please let me know if there are things that should be on here that I missed or don’t know about.
As I have started working more with Kubernetes lately I have found it very valuable to see what a manifest looks like before deploying it. Helm can basically be used as a quick and dirty way to render templates and see what the resulting manifests look like. Rendering templates locally also has the security advantage of not requiring tiller to run in your production cluster.
Helm has been sort of a subject of contention for a while now. Security folks REALLY don’t like running the server side component because it basically allows root access into your cluster unless it is managed in a specific way, which tends to add much more complexity to the cluster. There are plans in Helm 3 to remove the server side component, as well as to offer some more flexible configuration options that don’t rely on the Go templating, but that functionality is not ready yet, so I find rendering and deploying templates a nice middle ground for now.
At the same time, Helm does have some nice selling points which make it a good option for certain situations. I’d say the main draw to Helm is that it is ridiculously easy to set up and use, which is especially nice for things like local development, testing, or just trying to figure out how things work in Kubernetes. The other thing Helm does that is difficult to do otherwise is manage deployments, versions, and environments, although a number of users have had issues with these features.
Also check out Kustomize. If you aren’t familiar, it is basically a tool for managing per environment customizations for yaml manifests and configurations. You can get pretty far by rendering templates and overlaying kustomize on top of other configurations for managing different environments, etc.
Render a template (client side)
The first step to getting a working rendered template is to install the Helm client side component. There are installation instructions for various platforms here.
brew install kubernetes-helm # (on OSX)
You will also need to grab some charts to test with.
git clone git@github.com:kubernetes/charts.git
cd charts/stable/metallb
helm template --namespace test --name test .
Below is an example with customized variables.
helm template --namespace test --name test --set controller.resources.limits.cpu=100m .
You can dump the rendered template to a file if you want to look at it or change anything.
helm template --namespace test --name test --set controller.resources.limits.cpu=100m . > helm-test.yaml
You can even deploy these rendered templates directly if you want to.
helm template --namespace test --name test --set controller.resources.limits.cpu=100m . | kubectl apply -f -
Render a template (server side)
Make sure tiller is running in the cluster first. If you haven’t set up Helm on the server side before, this just means deploying tiller into the cluster. Again, I would not recommend doing this on anything outside of a throwaway or testing environment. After the Helm client has been installed you can use it to spin up tiller in the cluster.
helm init
Below is a basic example using the metallb chart.
helm install --namespace test --name test stable/metallb --dry-run --debug
Again, you can use customized variables.
helm install --namespace test --name test stable/metallb --set controller.resources.limits.cpu=100m --dry-run --debug
You may notice some extra configurations at the very beginning of the output. This is basically just showing default values that get applied as well as things that have been customized by the user. It is a quick way to see what kinds of things can be changed in the Helm chart.
Conclusion
Helm offers many other commands and options so I definitely recommend playing around with it and exploring the other things it can do.
I like to use both of these methods, but for now I just prefer to run a local tiller instance in a throwaway cluster (Docker for Mac) and pull in charts from the upstream repositories without having to git clone charts if I’m just looking at how the Kubernetes manifest configuration works. You can’t really use the server side rendering though to actually deploy the manifests because it sticks a bunch of other information into the command output.
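If you want the client side rendering without cloning the charts repo, something like helm fetch should also work. This assumes Helm 2 with the stable repo configured, and the metallb chart is just the same example as above.
# Pull the chart down locally instead of git cloning the whole charts repo
helm fetch stable/metallb --untar
helm template --namespace test --name test ./metallb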
All in all the Helm templating is pretty powerful and combining it with something like kustomize should get you to around 90% of where you need to be, unless you are managing much more complex and complicated configurations. The only thing that this method doesn’t lend itself very well to is managing releases and other metadata. Otherwise it is a great way to manage configurations.
As part of my recent project to build an ARM based Kubernetes cluster (more on that in a different post) I have run into quite a few cross platform compatibility issues trying to get containers working in my cluster.
After a little bit of digging, I found that support for manifest lists was added in version 2.2 of the Docker image specification, which allows Docker images to be built for different platforms, including arm and arm64. To add to this, I just recently discovered that in newer versions of Docker, there is a manifest sub-command that you can enable as an experimental feature to allow you to interact with the image manifests. The manifest command is great for exploring Docker images without having to pull, run, and test them locally, or fight with curl to get this information about an image from a Docker registry.
Enable the manifest command in Docker
First, make sure you have a semi-recent version of Docker installed; I’m using 18.03.1 in this post.
Edit your docker configuration file, usually located at ~/.docker/config.json. The following example assumes you have authentication configured, but really the only additional configuration needed is "experimental": "enabled".
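As a minimal sketch, the config.json only needs the experimental key; your real file will likely also contain an auths section and other existing settings, which you should leave in place.
{
  "experimental": "enabled"
}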
After adding the experimental configuration to the client you should be able to access the docker manifest commands.
docker manifest -h
To inspect a manifest just provide an image to examine.
docker manifest inspect traefik
This will spit out a bunch of information about the Docker image, including schema, platforms, digests, etc. which can be useful for finding out which platforms different images support.
As the output shows, the traefik image supports the arm and arm64 architectures. This is a really handy way of determining whether an image works across different platforms without having to pull it and try running a command against it to see if it works. The manifest sub-command has some other useful features that allow you to create, annotate, and push cross platform images, but I won’t go into details here.
Manifest tool
I’d also like to quickly mention the Docker manifest-tool. This tool has more or less been superseded by the built-in docker manifest command but still works basically the same way, allowing users to inspect, annotate, and push manifests. The manifest-tool has a few additional features and supports several registries other than Docker Hub, and even has a utility script to check whether a given registry supports the Docker v2 API and the 2.2 image spec. It is definitely still a good tool to look at if you are interested in publishing multi platform Docker images.
Downloading the manifest tool is easy as it is distributed as a Go binary.
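Something like the following should work; the version and asset name below are assumptions, so check the releases page on the estesp/manifest-tool GitHub repo for the current ones.
# Grab the prebuilt binary from the GitHub releases page (version/asset name are assumptions)
curl -fsSLo manifest-tool \
  https://github.com/estesp/manifest-tool/releases/download/v1.0.0/manifest-tool-linux-amd64
chmod +x manifest-tool
./manifest-tool inspect traefik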
I’d also like to touch quickly on the mquery tool. If you’re only interested in seeing if a Docker image uses manifest as well as high level multi-platform information you can run this tool as a container.
docker run --rm mplatform/mquery traefik
The output is super simple but useful for quickly getting platform information.
This can be useful if you don’t need a solution that is quite as heavy as manifest-tool or enabling the built in Docker experimental support.
You will still need to figure out how to build the image for each architecture first before pushing, but having the ability to use one image for all architectures is a really nice feature.
There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi platform images using a single name. This will be a great boon for helping to bring ARM adoption to the forefront and will help make the container experience on ARM much better going forward.
Kubernetes is complicated, as you’ve probably already discovered if you’ve used Kubernetes before. Likewise, the Kubectl command line tool can pretty much do anything but can feel cumbersome, clunky and generally overwhelming for those that are new to the Kubernetes ecosystem. In this post I want to take some time to describe a few of the CLI tools that I have discovered that help ease the pain of working with and managing Kubernetes from the command line.
There are many more tools out there and the list keeps growing, so I will probably revisit this post in the future to add more cool stuff as the community continues to grow and evolve.
Where to find projects?
As a side note, there are a few places to check for tools and projects. The first is the CNCF Cloud Native Landscape. This site aims to keep track of all the various projects in the Cloud/Kubernetes world. An entire post could be written about all of the features and filters, but at the highest level it is useful for exploring and discovering all the different evolving projects. Make sure to check out the filtering capabilities.
The other project I have found to be extremely useful for finding different projects is the awesome-kubernetes repo on GitHub. I found a number of tools mentioned in this post because of the awesome-kubernetes project. There is some overlap between the Cloud Native Landscape and awesome-kubernetes but they mostly complement each other very nicely. For example, awesome-kubernetes has a lot more resources for working with Kubernetes and a lot of the smaller projects and utilities that haven’t made it into the Cloud Native Landscape. Definitely check this project out if you’re looking to explore more of the Kubernetes ecosystem.
Kubectl tricks
These are various little tidbits that I have found to help boost my productivity from the CLI.
Tab completion – The first thing you will probably want to get working when starting. There are just too many options to memorize and tab completion provides a nice way to look through all of the various commands when learning how Kubernetes works. To install (on OS X) run the following command.
brew install bash-completion
In bash, adding the completion is as simple as running source <(kubectl completion bash). The same behavior can be accomplished in zsh using source <(kubectl completion zsh).
Aliases and shortcuts – One distinct pain point of Kubernetes is how cumbersome the CLI can be. If you use zsh and something like oh-my-zsh, there is a default set of aliases that work pretty well, which you can find here. There are many posts about aliases out there already so I won’t go into too much detail about them. I will say though that aliasing k to kubectl is one of the best time savers I have found so far. Just add the following snippet to your bash/zsh profile for maximum glory.
alias k=kubectl
kubectl --export – This is a nice hidden feature that basically allows users to switch Kubernetes from imperative (create) to declarative (apply). The --export flag will basically take an existing object and strip out unwanted/unneeded metadata like statuses and timestamps and present a clean version of what’s running, which can then be exported to a file and applied to the cluster. The biggest advantage of using declarative configs is the ability to manage and maintain them in git repos.
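For example, something like this; the deployment name is just a placeholder.
# Grab a clean, re-applyable version of an existing object
kubectl get deployment mydeployment -o yaml --export > mydeployment.yaml
# ...edit as needed, commit it to git, then manage it declaratively
kubectl apply -f mydeployment.yaml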
kubectl top – In newer versions, there is the top command, which gives a high level overview of CPU and memory utilization in the cluster. Utilization can be filtered at the node level as well as the pod level to give a very quick and dirty view into potential bottlenecks in the cluster. In older versions Heapster needs to be installed for this functionality to work correctly, while newer versions need metrics-server to be running.
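Usage is about as simple as it gets.
# Node level utilization
kubectl top nodes
# Pod level utilization, filtered to a namespace
kubectl top pods -n kube-system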
kubectl explain – This is a utility built into kubectl that basically provides a man page for what each Kubernetes resource does. It is a simple way to explore Kubernetes without leaving the terminal.
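For example, you can drill into nested fields as well.
# Describe what a pod spec is
kubectl explain pod.spec
# Drill down into nested fields
kubectl explain pod.spec.containers.livenessProbe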
This is an amazing little utility for quickly moving between Kubernetes contexts and namespaces. Once you start working with multiple different Kubernetes clusters, you notice how cumbersome it is to switch between environments and namespaces. Kubectx solves this problem by providing a quick and easy way to see what environments and namespaces a user is currently in and also quickly switch between them. I haven’t had any issues with this tool and it is quickly becoming one of my favorites.
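Basic usage looks like this; the context name is a placeholder, and kubens (which ships alongside kubectx) does the same thing for namespaces.
# List contexts, switch to one, then jump back to the previous one
kubectx
kubectx my-cluster
kubectx -
# Same idea for namespaces
kubens kube-system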
Dealing with log output using kubectl is a bit of a chore. Stern (and similarly kail) offer a much nicer user experience when dealing with logs. These tools give users the ability to do things like show logs for multiple containers in a pod, use regex matching to tail logs for specific containers, print nicely colored output for distinguishing between logs, filter logs by namespace, and a bunch of other nice features.
Obviously for a full setup, using an aggregated/centralized logging solution with something like Fluentd or Logstash would be more ideal, but for examining logs in a pinch, these tools do a great job and are among my favorites. As an added bonus, I don’t have to copy/paste log names any more.
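A couple of quick stern examples; the pod query is a regex, and the names here are just placeholders.
# Tail recent logs from every pod matching the regex, across containers
stern "web-\w" --tail 20
# Tail a specific component in another namespace
stern -n kube-system kube-dns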
yq is a nice little command line tool for parsing yaml files, which works in a similar way to the venerable jq. Parsing, reading, updating yaml can sometimes be tricky and this tool is a great and lightweight way to manipulate configurations. This tool is especially useful for things like CI/CD where a tag or version might change that is nested deep inside yaml.
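For example, reading and updating a nested value looks something like this. This sketch assumes the older v2.x read/write syntax (newer yq versions use an eval-style syntax instead), and the file and path are placeholders.
# Read a nested value out of a values file
yq r values.yaml image.tag
# Update it in place, e.g. from a CI/CD pipeline
yq w -i values.yaml image.tag v1.2.3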
There is also the lesser known jsonpath option that allows you to interact with the json version of a Kubernetes object, baked into kubectl. This feature is definitely less powerful than jq/yq but works well when you don’t want to overcomplicate things. Below you can see we can use it to quickly grab the name of an object.
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
Working with yaml and json for configuration in general seems to be an emerging pattern for almost all of the new projects. It is definitely worth learning a few tools like yq and jq to get better at parsing and manipulating data using these tools.
Similar to the above, ksonnet and jsonnet are basically templating tools for working with Kubernetes and json objects. These two tools work nicely for managing Kubernetes manifests and make a great fit for automating deployments, etc. with CI/CD.
ksonnet and jsonnet are gaining popularity because of their ease of use and simplicity compared to a tool like Helm, which also does templating but needs a pod with system-level permissions (tiller) running in the Kubernetes cluster. Jsonnet is all client side, which removes that added attack vector while still giving users a lot of the flexibility for creating and managing configs that a templating language provides.
More random Kubernetes tricks
Since 1.10, kubectl has had the ability to port forward to a resource name rather than just a pod. So instead of looking up a running pod to connect to every time, you can just grab the service or deployment name and port forward to it.
port-forward TYPE/NAME [LOCAL_PORT:]REMOTE_PORT
k port-forward deployment/mydeployment 5000:6000
New in 1.11, which will be dropping soonish, there is a top level command called api-resources, which allows users to view and interact with API objects. This will be a nice troubleshooting tool to have if, for example, you want to see what kinds of objects are in a namespace. The following command will show you these objects.
k api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n foo
Another handy trick is the ability to grab a base64 string and decode it on the fly. This is useful when you are working with secrets and need to quickly look at what’s in the secret. You can adapt the following command to accomplish this (make sure you have jq installed).
k get secret my-secret --namespace default -o json | jq -r '.data | .["secret-field"]' | base64 --decode
Just replace .["secret-field"] with your own field.
UPDATE: I just recently discovered a simple command line tool for decoding base64 on the fly called Kubernetes Secret Decode (ksd for short). This tool looks for base64 and renders it out for you automatically so you don’t have to worry about screwing around with jq and base64 to extract data out when you want to look at a secret.
k get secret my-secret --namespace default -o json | ksd
That command is much cleaner and easier to use. This utility is a Go app and there are binaries for it on the releases page, just download it and put it in your path and you are good to go.
Conclusion
The Kubernetes ecosystem is a vast world, and it only continues to grow and evolve. There are many more kubectl use cases and community tools that I haven’t discovered yet. Feel free to let me know of any other kubectl tricks you know of, and I will update them here.
I would love to grow this list over time as I get more acquainted with Kubernetes and its different tools.