Immutable WordPress Installations with Kubernetes

In this post I will describe some of the interesting discoveries I have made while working on a recent side project, including a few nice patterns for automating and managing WordPress deployments with Kubernetes.

Automating WordPress

I am very happy with the patterns that have emerged from this project. One thing that I have always struggled with (I'm sure others have as well) in the world of WP has been finding a good way to create completely reproducible and immutable code and configurations. For example, managing plugins and themes has been a painful experience because WP was designed back in a time before configuration as code and infrastructure as code. Due to this different paradigm, WP is usually stood up once, then managed by hand through its admin interface from that point on.

Tools have evolved since then, and a perfect example of one of the bridges between the old way and the new is the wp-cli. The wp-cli is basically a way to automate all kinds of things you would otherwise do in the UI. For example, wp-cli provides a way to manage plugins and themes, which, as I mentioned, has been notoriously difficult to do in the past.

The next step forward is a tool called bedrock and its accompanying workflow for customizing it and building your own Docker image. The roots/bedrock method provides the wp-cli in the build scripts, so if needed you can automate tasks using extra entrypoint scripts and/or wp-cli commands, which is a nice extra touch and shows that the maintainers of the project are putting a lot of effort into it.

A few other bells and whistles include a way to build custom plugins into Docker images for portability, rather than relying on some external persistent storage solution, which can quickly add overhead and complexity to a project. There are also modern tools like Composer and Packagist, which provide a way to install PHP packages (Composer) and a way to manage WP plugins via the Composer package manager (Packagist).
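
To make that concrete, here is a minimal sketch of what pulling a plugin in through Composer might look like. This assumes the common WPackagist setup (a Composer repository that mirrors the WordPress plugin directory); the plugin and version shown are just examples, not anything from my actual project.

{
  "repositories": [
    {
      "type": "composer",
      "url": "https://wpackagist.org"
    }
  ],
  "require": {
    "wpackagist-plugin/akismet": "^4.0"
  }
}

With something like this in place, a single composer install becomes the repeatable step that pulls in every plugin, which is exactly what you want when baking immutable images.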

Sidenote

There are several other ways of deploying WP into Kubernetes; unfortunately, most of these methods do not address multitenancy. If multitenancy is needed, a much more complicated approach is required, involving either NFS or some other many-to-many volume mapping.

Deploying with Kubernetes

The tricky part in all of this is that I was unable to find any examples of others using Kubernetes to deploy bedrock-managed Docker containers. There is a docker-compose.yaml file in the repo that works perfectly well, but the next step beyond that doesn't seem to be a topic that has been covered much.

Luckily, it is mostly straightforward to bring the docker-compose configuration into Kubernetes; there are just a few minor adjustments that need to be made. The link below should provide the basic scaffolding needed to bring bedrock into a Kubernetes cluster. This method even exposes a way to create and manage WP multisite, another notoriously difficult aspect of WP to manage.

https://gist.github.com/jmreicha/aae3ce024c13be1c561189946f1a0efc

There are a couple of things to note with this configuration. You will need to build and maintain your own Docker image based on the roots/bedrock repo linked above. You will also need some knowledge of Kubernetes and a working Kubernetes cluster in place. The configuration requires certificates and DNS, so cert-manager and external-dns will most likely need to be deployed into the cluster.

Finally, the password, the domain (example.com), the environment variables for configuring the database, and the Docker image in the configuration will all need to be updated to reflect your own environment. This method assumes that the WordPress database has already been split out to another location, so the Kubernetes cluster will need to be able to communicate with wherever the database is hosted.
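
For reference, the database wiring in the Deployment might look something like the snippet below. The variable names follow bedrock's .env conventions and the values are placeholders, so adjust both to match your own setup.

env:
  - name: DB_HOST
    value: "mysql.example.com"   # wherever the external database lives
  - name: DB_NAME
    value: "wordpress"
  - name: DB_USER
    value: "wordpress"
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:              # keep the password in a Secret, not in the manifest
        name: wordpress-db
        key: password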

To see some of the magic, change the number of replicas in the Kubernetes manifest from 1 to 2, and you should see a new, completely identical container come up with all the correct configuration and code and start taking traffic.
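
You can make the same change imperatively with kubectl. The deployment name here is a placeholder for whatever your manifest actually calls it.

# Scale the (hypothetical) wordpress deployment up to 2 replicas
kubectl scale deployment wordpress --replicas=2

# Watch the new pod come up alongside the original
kubectl get pods -w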

Conclusion

Switching to an immutable infrastructure approach with WP nets a big win. By adopting these new methods and workflows you can control everything with code, which removes the need to manually manage WP instances and instead allows you to create workflows and pipelines to do all of the heavy lifting.

These benefits include much more visibility into and control over changes, because Git becomes the central source of truth, which gives you a better picture of the what, when, and why of a change than any other system I have found. This new paradigm also enables the use of Continuous Integration as it is intended: Docker produces immutable artifacts (images) and Kubernetes manifests describe immutable deployments, which together create a clean and simple way to manage every aspect of running the WordPress site.

Exploring Linux Terminals

I have been experimenting with Linux on a recently acquired Lenovo X1 Carbon and as part of the process I have been exploring and testing out various productivity tools, including terminal emulators.

The first thing I discovered in this process is that there are a lot of terminals out there. They're all slightly different, and in my experience almost none of them were able to do all of the tweaks I like.

Here is the list of terminals I have tested out (so far).

As you can see, that is a bunch of terminals. To keep things short, in my exploration and testing, the best 3 I found were as follows.

Alacritty – 3rd

It is fast, the defaults are great, and it works perfectly with tiling window managers. This one would probably have been first on the list if it were able to provide a blinking cursor, but as I found with many of the options, there almost always seemed to be a gotcha.

A few high points for this terminal: it is written in Rust, it uses the GPU to offload rendering, it is cross platform, and, maybe my favorite feature, the configuration file is YAML based. The docs are also good and the community is really taking off, so I have a strong feeling this one will continue improving.

urxvt – 2nd

The biggest problem with this terminal for me is that it is painful to configure. There are no preferences configured out of the box, which some people prefer, but digging through old blog posts and Perl scripts is not how I like to spend my time.

This terminal is second on the list because it was the only other terminal able to do all of the small tweaks and adjustments that suit my preferences. It is also fast and lightweight, which are good things.

Here is the configuration I ended up with if interested.

Kitty – 1st

This terminal was a clear winner. It was able to do all of my custom tweaks and settings and because it has nice defaults I only needed to add about 10 lines of extra configuration.

This terminal has a lot of other stuff going for it. It is written in C and Python, it uses the GPU to offload rendering, which makes it fast, it is easy to configure, and for the most part it just works. The only gotcha I found was that I needed to explicitly turn on copy-on-select in my configuration, but that was easy.
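
For example, the copy-on-select tweak is a one-liner in kitty.conf. The snippet below is a small illustrative sketch, not my full configuration (that is linked below).

# ~/.config/kitty/kitty.conf

# Copy selected text straight to the clipboard
copy_on_select clipboard

# Blink the cursor every half second
cursor_blink_interval 0.5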

Here is the configuration I ended up with if interested.

Conclusion

I plan on keeping this list updated to some extent as I find more terminals to try out. As you can see, there are many different options, and there seem to be more all the time.

Your experience may differ, so obviously take these musings with a pinch of salt, and please do look through the various options and try things out to see if they will work for you. That said, I do think my use case is specific enough that these recommendations should be helpful in guiding most users.

Here is the repo with all of my various configurations if you want to check something out. There were a few terminal configs that didn’t end up there just because they were too minimal or I didn’t like them.

Bash Quick Substitution

One thing that I love about bash is that there is never a shortage of new tips and tricks to learn. I have been using bash for over 10 years now and just stumbled on this little trick.

This one (as the title implies) allows you to quickly substitute a string into the previous command and rerun the command with the substitution.

Quick substitution is officially part of the Bash Event Designators mechanism and is a great way to fix a typo from a previous command. Below is an example.

# Simple example to highlight substitutions
echo foo

# This will replace the string "foo" with "bar" and rerun the last command
^foo^bar

This shorthand notation is great for most use cases, with the exception of needing to replace multiple instances of a given string. Luckily that is easily addressed with the Event Designators expanded substitution syntax, shown below.

# This will substitute ALL occurrences of foo in the previous command
!!:gs/foo/bar/

# Slightly different syntax allows you to do the same thing in ZSH
^foo^bar^:G

The syntax is slightly more complicated in the first example, but should be familiar enough to anyone who has used sed and/or vim substitutions, and the second example is almost identical to the shorthand substitution.

fc

Taking things one step further, we can actually edit the previous command to fix anything more than a typo or a different argument. fc is a bash builtin, so it is available almost everywhere.

fc is especially useful for dealing with very long, complicated commands.

# Oops, we messed this up
echo fobarr | grep bar

# To fix it, just open the above in your default editor
fc

# When you write and quit the file, the edited command will be executed in your current shell
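
A couple of other fc invocations are worth knowing; both are standard bash builtin behavior.

# List the last five commands from history
fc -l -5

# Rerun the previous command, substituting the first "fobarr" with "foobar",
# without opening an editor
fc -s fobarr=foobar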

There are many great tutorials available so I would recommend looking around to see all the options and get more ideas.

Deploy AWS SSM agent to CoreOS

If you have been a CoreOS user for long you will undoubtedly have noticed that there is no real package management system. If you're not familiar, the philosophy of CoreOS is to avoid using a package manager and instead rely heavily on the power of Docker containers, along with a few system-level tools, to manage servers. The problem I recently stumbled across is that the AWS SSM agent is packaged in Debian and RPM formats and is assumed to be installed with a package manager, which obviously won't work on CoreOS. In the remainder of this post I will describe the steps I took to get the SSM agent working on a CoreOS/Dockerized server. Overall I am very happy with how well this solution turned out.

To get started, there is a nice tutorial here for using AWS Session Manager through the console. The most important thing that needs to be done before "installing" the SSM agent on the CoreOS host is to set up the AWS instance with the correct permissions for the agent to be able to communicate with AWS. To accomplish this, I created a new IAM role and attached the AmazonEC2RoleForSSM policy to it through the AWS console.
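
If you prefer the CLI over the console, the equivalent setup looks roughly like the sketch below. The role and profile names are placeholders of my own choosing, and ec2-trust.json is assumed to be a standard trust policy document that allows ec2.amazonaws.com to assume the role.

# Create the role (ec2-trust.json must grant ec2.amazonaws.com sts:AssumeRole)
aws iam create-role --role-name ssm-agent-role \
  --assume-role-policy-document file://ec2-trust.json

# Attach the managed SSM policy
aws iam attach-role-policy --role-name ssm-agent-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleForSSM

# Wrap the role in an instance profile so it can be attached to the instance
aws iam create-instance-profile --instance-profile-name ssm-agent-profile
aws iam add-role-to-instance-profile --instance-profile-name ssm-agent-profile \
  --role-name ssm-agent-role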

After this step is done, you can bring up the ssm-agent.

Install the ssm-agent

After ensuring the correct permissions have been applied to the server that is to be managed, the next step is to bring up the agent. To do this using Docker, there are some tricks that need to be used to get things working correctly, notably fixing the PID 1 zombie reaping problem that Docker has.

I basically lifted the Dockerfile from here originally and adapted it into my own public Docker image at jmreicha/ssm-agent:latest. In case readers want to try this, my image is a little bit newer than the original source and has a few tweaks. The Dockerfile itself is mostly straightforward; the main difference is that the ssm-agent process won't reap child processes in the default Debian image.

In order to work around the child reaping problem I substituted in the slick Phusion Docker baseimage, which has a very simple process manager that allows shells spawned by the ssm-agent to be reaped when they get terminated. I have my Dockerfile hosted here if you want to check out how the Phusion baseimage version works.

Once the child reaping problem was solved, here is the command I initially used to spin up the container, which of course still didn’t work out of the box.

docker run \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  jmreicha/ssm-agent:latest

I received the following errors.

2018-11-05 17:42:27 INFO [OfflineService] Starting document processing engine...
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [OfflineService] Starting message polling
2018-11-05 17:42:27 INFO [OfflineService] Starting send replies to MDS
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] starting long running plugin manager
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
2018-11-05 17:42:27 INFO [HealthCheck] HealthCheck reporting agent health.
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting session document processing engine...
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
2018-11-05 17:42:27 INFO [StartupProcessor] Executing startup processor tasks
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:27 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully created ssm-user
2018-11-05 17:42:27 ERROR [MessageGatewayService] Failed to add ssm-user to sudoers file: open /etc/sudoers.d/ssm-agent-users: no such file or directory
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0d33006836710e7ef, requestId: 2975fe0d-846d-4256-9d50-57932be03925
2018-11-05 17:42:27 INFO [MessageGatewayService] listening reply.
2018-11-05 17:42:27 INFO [MessageGatewayService] Opening websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully opened websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting receiving message from control channel
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:32 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:35 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.

The first error that jumped out in the logs is "Unable to open serial port". There is also an error referring to not being able to add the ssm-user to the sudoers file.

The fix for these issues is to pass the CoreOS serial device into the container with the "--device=/dev/ttyS0" flag and to add a volume mount for the sudoers path with "-v /etc/sudoers.d:/etc/sudoers.d". The full Docker run command is shown below.

docker run -d --restart unless-stopped --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest

After fixing the errors found in the logs, and bringing up the containerized SSM agent, go ahead and create a new session in the AWS console.

The session should come up pretty much immediately and you should be able to run commands like you normally would.
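
You can also start a session from the CLI instead of the console, assuming the Session Manager plugin for the AWS CLI is installed locally.

# Start an interactive session against the instance from the logs above
aws ssm start-session --target i-0d33006836710e7ef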

The last, optional step is to run the agent as a systemd service so that it starts up automatically if it dies or if the server gets rebooted. You can probably get away with just using the Docker restart policy if you aren't interested in configuring a systemd service, which is what I have chosen to do for now.
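
If you do go the systemd route, a minimal unit along the lines of this sketch should work. The unit and container names are my own choices, not anything official from AWS.

# /etc/systemd/system/ssm-agent.service
[Unit]
Description=AWS SSM agent in a Docker container
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container left over from a previous run
ExecStartPre=-/usr/bin/docker rm -f ssm-agent
ExecStart=/usr/bin/docker run --rm --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest
ExecStop=/usr/bin/docker stop ssm-agent
Restart=always

[Install]
WantedBy=multi-user.target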

You could even adapt this Docker image into a Kubernetes manifest and run it as a DaemonSet on each node of the cluster if desired, to simplify things and add another layer of security. I may return to the systemd unit and/or Kubernetes manifest in the future if readers are interested.

Conclusion

The AWS Session manager is a fantastic tool for troubleshooting/debugging as well as auditing and security.

With SSM you can make sure specific servers are never exposed to the internet directly, and you can also keep track of what kinds of commands have been run on them. As a bonus, the AWS console keeps track of all the previous sessions that were created, and if you hook it up to CloudWatch and/or S3 you can see all the commands and the times they were run, with nice simple links to the log files.

SSM also lets you do a lot of other cool stuff, like running scripts against either a subset of servers filtered by tags or against all servers recognized by SSM. I'm sure there are other features as well; I just haven't found them yet.

Exploring Docker Manifests

As part of my recent project to build an ARM based Kubernetes cluster (more on that in a different post) I have run into quite a few cross platform compatibility issues trying to get containers working in my cluster.

After a little bit of digging, I found that support for manifest lists was added in version 2.2 of the Docker image specification, which allows Docker images to be built against different platforms, including arm and arm64. To add to this, I just recently discovered that newer versions of Docker have a manifest subcommand that you can enable as an experimental feature, allowing you to interact with image manifests. The manifest command is great for exploring Docker images without having to pull, run, and test them locally, or fighting with curl to get this information about an image from a Docker registry.

Enable the manifest command in Docker

First, make sure you have a semi-recent version of Docker installed; I'm using 18.03.1 in this post.

Edit your Docker configuration file, usually located at ~/.docker/config.json. The following example assumes you have authentication configured, but really the only additional configuration needed is { "experimental": "enabled" }.

{
  "experimental": "enabled",
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXX"
    }
  }
}

After adding the experimental configuration to the client you should be able to access the docker manifest commands.

docker manifest -h

To inspect a manifest just provide an image to examine.

docker manifest inspect traefik

This will spit out a bunch of information about the Docker image, including schema version, platforms, and digests, which can be useful for finding out which platforms different images support.

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:36df85f84cb73e6eee07767eaad2b3b4ff3f0a9dcf5e9ca222f1f700cb4abc88",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:f98492734ef1d8f78cbcf2037c8b75be77b014496c637e2395a2eacbe91e25bb",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:7221080406536c12abc08b7e38e4aebd811747696a10836feb4265d8b2830bc6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

As you can see above, the traefik image supports amd64, arm, and arm64 architectures. This is a really handy way of determining whether an image works across different platforms without having to pull it and run a command against it to see if it works. The manifest subcommand has some other useful features that allow you to create, annotate, and push cross-platform images; a quick sketch of those is below.
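
The image tags in this sketch are placeholders, and it assumes the per-architecture images have already been built and pushed to the registry.

# Combine existing per-arch images into a single manifest list
docker manifest create jmreicha/example:latest \
  jmreicha/example:amd64 jmreicha/example:arm64

# Record the architecture details for the arm64 entry
docker manifest annotate jmreicha/example:latest \
  jmreicha/example:arm64 --arch arm64

# Push the manifest list to the registry
docker manifest push jmreicha/example:latest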

Manifest tool

I'd also like to quickly mention the Docker manifest-tool. This tool has more or less been superseded by the built-in docker manifest command but still works basically the same way, allowing users to inspect, annotate, and push manifests. The manifest-tool has a few additional features, supports several registries other than Dockerhub, and even has a utility script to check whether a given registry supports the Docker v2 API and the 2.2 image spec. It is definitely still a good tool to look at if you are interested in publishing multi-platform Docker images.

Downloading the manifest tool is easy as it is distributed as a Go binary.

curl -OL https://github.com/estesp/manifest-tool/releases/download/latest/manifest-tool-linux-amd64
mv manifest-tool-linux-amd64 manifest-tool
chmod +x manifest-tool

Once you have the manifest-tool set up you can start using it, similar to the manifest inspect command.

./manifest-tool inspect traefik

This will dump out information about the image manifest if it exists.

Name:   traefik (Type: application/vnd.docker.distribution.manifest.list.v2+json)
Digest: sha256:eabb39016917bd43e738fb8bada87be076d4553b5617037922b187c0a656f4a4
 * Contains 3 manifest references:
1    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
1       Digest: sha256:e65103d16ded975f0193c2357ccf1de13ebb5946894d91cf1c76ea23033d0476
1  Mfst Length: 739
1     Platform:
1           -      OS: linux
1           - OS Vers:
1           - OS Feat: []
1           -    Arch: amd64
1           - Variant:
1           - Feature:
1     # Layers: 2
         layer 1: digest = sha256:03732cc4924a93fcbcbed879c4c63aad534a63a64e9919eceddf48d7602407b5
         layer 2: digest = sha256:6023e30b264079307436d6b5d179f0626dde61945e201ef70ab81993d5e7ee15

2    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
2       Digest: sha256:6cb42aa3a9df510b013db2cfc667f100fa54e728c3f78205f7d9f2b1030e30b2
2  Mfst Length: 739
2     Platform:
2           -      OS: linux
2           - OS Vers:
2           - OS Feat: []
2           -    Arch: arm
2           - Variant: v6
2           - Feature:
2     # Layers: 2
         layer 1: digest = sha256:8996ab8c9ae2c6afe7d318a3784c7ba1b1b72d4ae14cf515d4c1490aae91cab0
         layer 2: digest = sha256:ee51eed0bc1f59a26e1d8065820c03f9d7b3239520690b71fea260dfd841fba1

3    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
3       Digest: sha256:e12dd92e9ae06784bd17d81bd8b391ff671c8a4f58abc8f8f662060b39140743
3  Mfst Length: 739
3     Platform:
3           -      OS: linux
3           - OS Vers:
3           - OS Feat: []
3           -    Arch: arm64
3           - Variant: v8
3           - Feature:
3     # Layers: 2
         layer 1: digest = sha256:78fe135ba97a13abc86dbe373975f0d0712d8aa6e540e09824b715a55d7e2ed3
         layer 2: digest = sha256:4c380abe0eadf15052dc9ca02792f1d35e0bd8a2cb1689c7ed60234587e482f0

Likewise, you can annotate and push image manifests using the manifest-tool.  Below is an example command for pushing multiple image architectures.

./manifest-tool --docker-cfg '~/.docker' push from-args --platforms "linux/amd64,linux/arm64" --template jmreicha/example:test --target "jmreicha/example:test"

mquery

I'd also like to touch quickly on the mquery tool. If you're only interested in seeing whether a Docker image uses a manifest list, along with high-level multi-platform information, you can run this tool as a container.

docker run --rm mplatform/mquery traefik

Here’s what the output might look like.  Super simple but useful for quickly getting platform information.

Image: traefik
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm64/v8

This can be useful if you don't need a solution as heavy as the manifest-tool and don't want to enable the built-in Docker experimental support.

You will still need to figure out how to build the image for each architecture before pushing, but having the ability to use one image name for all architectures is a really nice feature.

There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi-platform images using a single name. This will be a great boon for ARM adoption and will help make the container experience on ARM much better going forward.
