Immutable WordPress Installations with Kubernetes

WordPress on Kubernetes

In this post I will describe some of the interesting discoveries I made during a recent side project, including a few nice ways to automate and manage WordPress deployments with Kubernetes.

Automating WordPress

I am very happy with the patterns that have emerged from this project. One thing that I have always struggled with (I’m sure others have as well) in the world of WP has been finding a good way to create completely reproducible and immutable code and configurations. For example, managing plugins and themes has been a painful experience because WP was designed back in a time before configuration as code and infrastructure as code. Because of this older paradigm, WP is usually stood up once and then managed through its admin UI from that point on.

Tools have evolved since then, and a perfect example of one of the bridges between the old and new ways is the wp-cli. The wp-cli is basically a way to automate all kinds of things you would otherwise do in the UI. For example, wp-cli provides a way to manage plugins and themes, which as I mentioned has been notoriously difficult to do in the past.
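
As a quick illustration (the plugin and theme slugs here are just examples, not anything specific to this project), installing and activating components non-interactively with wp-cli looks like this:

# install and activate a plugin and a theme from the command line
wp plugin install akismet --activate
wp theme install twentytwenty --activate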

The next step forward is a tool called Bedrock, along with its accompanying pattern of modifying it and building your own Docker image. The roots/bedrock method bakes the wp-cli into the build scripts, so if needed you can automate tasks using extra entrypoint scripts and/or wp-cli commands. It is a nice extra touch and shows that the maintainers of the project are putting a lot of effort into it.

A few other bells and whistles include a way to build custom plugins into Docker images for portability, rather than relying on some external persistent storage solution that can quickly add overhead and complexity to a project, as well as modern tools like PHP Composer and Packagist, which provide a way to install packages (Composer) and a way to manage WP plugins through the Composer package manager (Packagist).
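
For example, assuming the usual WordPress Packagist naming convention (wpackagist-plugin/<slug> and wpackagist-theme/<slug>), pulling plugins in through Composer might look something like this:

composer require wpackagist-plugin/akismet
composer require wpackagist-theme/twentytwenty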

Sidenote

There are several other ways of deploying WP into Kubernetes; unfortunately most of these methods do not address multitenancy. If multitenancy is needed, a much more complicated approach is required, involving either NFS or some other many-to-many volume mapping.

Deploying with Kubernetes

The tricky part in all of this is that I was unable to find any examples of others using Kubernetes to deploy Bedrock-managed Docker containers. There is a docker-compose.yaml file in the repo that works perfectly well, but the next step beyond that doesn’t seem to be a topic that has been covered much.

Luckily it is mostly straightforward to bring the docker-compose configuration into Kubernetes; there are just a few minor adjustments that need to be made. The link below should provide the basic scaffolding needed to bring Bedrock into a Kubernetes cluster. This method will even expose a way to create and manage WP multisite, another notoriously difficult aspect of WP to manage.

https://gist.github.com/jmreicha/aae3ce024c13be1c561189946f1a0efc

There are a couple of things to note with this configuration. You will need to build and maintain your own Docker image based on the roots/bedrock repo linked above. You will also need some knowledge of Kubernetes and a working Kubernetes cluster in place. The configuration requires certificates and DNS, so cert-manager and external-dns will most likely need to be deployed into the cluster.

Finally, the password, the domain (example.com), the environment variables for configuring the database, and the Docker image in the configuration will need to be updated to reflect your own environment. This method assumes that the WordPress database has already been split out to another location, so the Kubernetes cluster will need to be able to communicate with wherever the database is hosted.

To see some of the magic, change the number of replicas in the Kubernetes manifest from 1 to 2, and you should see a new, completely identical container come up with all the correct configuration and code and start taking traffic.
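
As a rough sketch (assuming the Deployment in the manifest is named wordpress), the same scale-up can be done imperatively:

kubectl scale deployment/wordpress --replicas=2
kubectl get pods -w   # watch the second identical pod come up and go Ready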

Conclusion

Switching to an immutable infrastructure approach with WP nets a big win. By adopting these methods and workflows you can control everything with code, which removes the need to manually manage WP instances and instead allows you to create workflows and pipelines to do all of the heavy lifting.

These benefits include much more visibility into changes, because Git becomes the central source of truth, giving you a better picture of the what, when and why than any other system I have found. This new paradigm also enables Continuous Integration as it is intended: Docker produces immutable artifacts and Kubernetes manifests describe immutable deployments, which together create a clean and simple way to manage every aspect of running the WordPress site.


Untangling CLI Text Processing Tools

Update (2/7/2020): Added jc

The landscape of command-line-driven text manipulation and processing tools is somewhat large and confusing, with more and more tools emerging all the time. Because I am having trouble keeping them all in my head, I decided to make a little reference guide to help remember which tool to choose for the task at hand.

  • jq
  • jc
  • yq (both python and go versions)
  • oq
  • xq
  • hq
  • jk
  • ytt
  • configula
  • jsonnet
  • sed (for everything else)

jq

Reach for jq first when you need to do any kind of processing with JSON files.

From the website, “jq is like sed for JSON data – you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.”

sudo apt-get install jq
# or
brew install jq

# consume json and output as unchanged json
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.'
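
Once the JSON is flowing, jq filters can pull out individual fields. For example, grabbing just the commit messages from the same GitHub API response should look something like this:

curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[].commit.message'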

jc

This project looks very interesting and is a refreshing thing to see in the Linux world. JC is basically a way to create structured objects (à la PowerShell) as JSON output from running various Linux commands. From the GitHub repo: “This tool serializes the output of popular gnu linux command line tools and file types to structured JSON output. This allows piping of output to tools like jq”.

Transforming command output into structured objects can massively simplify working with it, especially when paired with jq for further filtering.

pip3 install --upgrade jc
df | jc --df -p
[
  {
    "filesystem": "devtmpfs",
    "1k_blocks": 1918816,
    "used": 0,
    "available": 1918816,
    "use_percent": 0,
    "mounted_on": "/dev"
  },
  {
    "filesystem": "tmpfs",
    "1k_blocks": 1930664,
    "used": 0,
    "available": 1930664,
    "use_percent": 0,
    "mounted_on": "/dev/shm"
  },
  ...
]
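
Pairing the two, the structured output above can be piped straight into jq. For example, listing just the filesystem names from the df output shown above:

df | jc --df | jq -r '.[].filesystem'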

yq (Python) (Go)

This tool can be confusing because there is both a Python version and a Go version. On top of that, the Python version includes its own version of xq, which is different than the standalone xq tool.

The main difference between the Python and Go versions is that the Python version can deal with both yaml and xml, while the Go version is a command line tool meant to deal only with yaml.

pip install yq
cat input.yml | yq -y .foo.bar

oq

From the website, oq is “A performant, portable jq wrapper that facilitates the consumption and output of formats other than JSON; using jq filters to transform the data”.

The claim to fame that oq has is that it is very similar to jq but works better with other data formats, including xml and yaml. For example, you can read in some xml (among other formats), apply some filters, and output to yaml (or others). This flexibility makes oq a good option if you need to deal with data formats outside of JSON.

snap install oq
# or
brew tap blacksmoke16/tap && brew install oq

# consume json and output xml
echo '{"name": "Jim"}' | oq -o xml .

xq

From the Github page, “Apply XPath expressions to XML, like jq does for JSONPath and JSON”.

The coolest use case I have found for xq so far is taking in an xml file and outputting it as a json file; surprisingly, I haven’t found another tool that can do this (the oq authors say there are plans to add it in the future). The simplest example is to curl a page, pipe it through xq to convert it to json, and then pipe it again through jq to manipulate the data.

pip install yq
curl -s https://mysite.xml | xq .
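
Chaining it with jq works as described above; the element names below are hypothetical and depend entirely on the XML you pull down:

curl -s https://mysite.xml | xq . | jq '.rss.channel.title'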

hq

Like xq (and jq) but for html parsing. This tool is handy for manipulating html in the same way you would xml or json.

pip install hq
cat /path/to/file.html | hq '`Hello, ${/html/head/title}!`'

jk

This is a newer tool with a slightly different approach, aimed at helping to automate configurations, especially for things like Kubernetes, but it should work with most structured data.

This tool works with json, yaml and hcl and can be used in conjunction with Javascript, making it an interesting option.

curl -Lo jk https://github.com/jkcfg/jk/releases/download/0.3.1/jk-darwin-amd64
chmod +x jk
sudo mv jk /usr/local/bin/

// alice.js
const alice = {
  name: 'Alice',
  beverage: 'Club-Mate',
  monitors: 2,
  languages: [
    'python',
    'haskell',
    'c++',
    '68k assembly', // Alice is cool like that!
  ],
};

// Instruct jk to write the alice object as a YAML file.
export default [
  { value: alice, file: `developers/${alice.name.toLowerCase()}.yaml` },
];

jk generate -v alice.js

ytt

This is basically a simplified templating language that only intends to deal with yaml. The approach the authors took was to create yaml templates and sandbox/embed a Python-like language into the templating engine, allowing users to call on the power of Python inside their templates.

The easiest way to play around with ytt if you don’t want to clone the repo is to try out the online playground.

curl -Lo ytt https://github.com/k14s/ytt/releases/download/v0.25.0/ytt-darwin-amd64
chmod +x ytt
sudo mv ytt /usr/local/bin/

git clone https://github.com/k14s/ytt.git && cd ytt
ytt -f examples/playground/example-demo/
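
For a minimal standalone sketch (hello.yaml is a made-up template, not part of the playground examples), variables defined on #@ lines can be referenced in values:

cat > hello.yaml <<'EOF'
#@ name = "world"
greeting: #@ "Hello, " + name
EOF

ytt -f hello.yaml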

configula

From the GitHub page: “Configula is a configuration generation language and processor. Its goal is to make the programmatic definition of declarative configuration easy and intuitive”.

Similar in some ways to ytt, but instead of embedding Python into the yaml template file, you create a .py file and then render it into yaml using the Configula command line tool.

git clone https://github.com/brendandburns/configula
cd configula

# tiny.py
# Define a YAML object where the 'foo' field has the value of evaluating 1 + 2 (e.g. 3)
my_obj = foo: !~ 1 + 2
my_obj.render()

./configula examples/tiny.py

jsonnet

Jsonnet bills itself as a “data templating language for app and tool developers”. This tool was originally created by folks working at Google and has been around for quite some time now. For some reason it always seems to fly under the radar, but it is super powerful.

The language is a superset of JSON and adds conditionals, loops and other functions available as part of its standard library. Jsonnet can render its output as json or yaml.

pip install jsonnet
# or
brew install jsonnet

// example.jsonnet
{
  person1: {
    name: "Alice",
    welcome: "Hello " + self.name + "!",
  },
  person2: self.person1 { name: "Bob" },
}

jsonnet example.jsonnet
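
Running the example (without the -S string-output flag, which expects a top-level string) should produce plain JSON along these lines, with self resolved late enough that person2 picks up its own name in the welcome message:

{
   "person1": {
      "name": "Alice",
      "welcome": "Hello Alice!"
   },
   "person2": {
      "name": "Bob",
      "welcome": "Hello Bob!"
   }
}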

sed

For (pretty much) everything else, there is sed. Sed, short for stream editor, has been around forever and is basically a Swiss army knife for manipulating text; if you have been using *nix for any length of time you have more than likely come across it. From the docs, a stream editor “is used to perform basic text transformations on an input stream”.

The odds are good that sed will do what you’re looking for if one of the aforementioned tools doesn’t fit.
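
For example, an in-place substitution across a (hypothetical) manifest.yaml, keeping a backup copy, is a one-liner:

sed -i.bak 's|extensions/v1beta1|apps/v1|g' manifest.yaml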


Testing for Deprecated Kubernetes APIs

Kubernetes API changes are coming up and I wanted to make a quick blog post to highlight what this means and show a few of the things I have discovered for dealing with the changes.

First, there have been some relevant announcements regarding the changes and deprecations recently. The first is the API Deprecations in 1.16 announcement, which describes the changes to the API and some of the things to look at and do to fix problems.

The next post is the Kubernetes 1.16 release announcement, which contains a section “Significant Changes to the Kubernetes API” that references the deprecation post.

Another excellent resource for learning about how Kubernetes deprecations work is the API deprecation documentation, highlighted in the deprecation post, but not widely shared.

In my opinion, the Kubernetes community really dropped the ball in terms of communicating these changes and missed an opportunity to describe and discuss the problems that they will create. I understand that the community is gigantic and it would be impossible to cover every case, but a few blog posts describing the changes, with not much other official communication or guidance for how to handle and fix the impending problems, is a little bit underwhelming.

The average user probably doesn’t pay attention to these blog posts, and there are a lot of old Helm charts still out in the wild, so I’m confident that the incoming changes will create headaches and table flips when people start upgrading. As an example, if you have an old API defined and running in a pre-1.16 cluster and upgrade without fixing the API version first, APPS IN YOUR CLUSTER WILL BREAK. The good news is that new clusters won’t allow the old API versions, making errors easier to see and deal with.

Testing for and fixing deprecated APIs

With that mini rant out of the way, there is a simple but effective way to test your existing configurations for API compatibility.

Conftest is a nice little tool that helps you write tests against structured configuration data using the Rego language from Open Policy Agent (OPA). Conftest works with many file types, including JSON, TOML and HCL, which makes it a great choice for testing a variety of different configurations, but it is especially useful for testing Kubernetes YAML.

To get started, install conftest.

wget https://github.com/instrumenta/conftest/releases/download/v0.15.0/conftest_0.15.0_Linux_x86_64.tar.gz
tar xzf conftest_0.15.0_Linux_x86_64.tar.gz
sudo mv conftest /usr/local/bin

Then we can use the handy policy provided by the deprek8 repo to validate the API versions.

curl https://raw.githubusercontent.com/naquada/deprek8/master/policy/deprek8.rego > deprek8.rego
conftest test -p deprek8.rego sample/manifest.yaml
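
Since a lot of configuration in the wild comes from Helm, you should also be able to render a chart and pipe it straight into Conftest via stdin (the chart path here is just a placeholder):

helm template ./my-chart | conftest test -p deprek8.rego -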

Here’s what a FAIL condition might look like according to what is defined in the rego policy file for an outdated API version.

FAIL - sample/manifest.yaml - Deployment/my-deployment: API extensions/v1beta1 for Deployment is no longer served by default, use apps/v1 instead.

The Rego policy is what actually defines the behavior that Conftest will display and as you can see, it found an issue with the Deployment object defined in the test manifest.

Below is the Rego policy that causes Conftest to spit out the FAILure message. The syntax is clean and easy to follow, so writing and adjusting policies is easy.

_deny = msg {
  resources := ["DaemonSet", "Deployment", "ReplicaSet"]
  input.apiVersion == "extensions/v1beta1"
  input.kind == resources[_]
  msg := sprintf("%s/%s: API extensions/v1beta1 for %s is no longer served by default, use apps/v1 instead.", [input.kind, input.metadata.name, input.kind])
}

Once you know what is wrong with the configuration, you can use the kubectl convert subcommand to fix up the existing deprecated API objects. Again, attempting to create objects using deprecated APIs in 1.16 will be rejected automatically by Kubernetes, so you will only need to deal with converting existing objects in old clusters being upgraded.

From the above error, we know the object type (Deployment) and the version (extensions/v1beta1). With this information we can run the convert command to fix the object.

# General syntax
kubectl convert -f <file> --output-version <group>/<version>

# The --output-version flag allows specifying the API version to upgrade to 
kubectl convert -f sample/manifest.yaml  --output-version apps/v1

# Omitting the --output-version flag will convert to the latest version
kubectl convert -f sample/manifest.yaml

After the existing objects have been converted and any manifest files have been updated you should be safe to upgrade Kubernetes.

Bonus

There was a fantastic episode of TGIK a while back called Kubernetes API Removal and You that describes in great detail what all of the deprecations mean and how to fix them. It is definitely worth a watch if you have the time.

Conclusion

OPA, along with configuration-testing tools like Conftest and Rego policies, is a great way to harden and help standardize configurations. Taken a step further, these tools can be extended to test all sorts of other things.

Conftest looks especially promising because of the number of file types that it understands. There is a lot of potential here for doing things like unit testing Kubernetes configuration files and other things like Terraform configs.

I haven’t written any Rego policies yet, but the language looks pretty straightforward and easy to deal with. I think that as configurations continue to evolve, tools like Conftest (OPA), Kubeval and Kustomize will gain more traction and help simplify some of the complexities of Kubernetes.


Build a Pine64 Kubernetes Cluster with k3os

Kubernetes (k3os) arm64 cluster with custom 3D printed case

The k3os project was recently announced and I finally got a chance to test it out. Together with its counterpart, k3s, k3os greatly simplifies the steps needed to create a Kubernetes cluster and reduces the overhead of running one. Paired with Rancher for the UI, all of these components make for an even better option. You can even run Rancher in your (arm64) k3os cluster via the Rancher Helm chart now.

Instead of using Etcd, k3s opts to use SQLite by default and does some other magic to reduce extra Kubernetes bloat and simplify management. Check here for more about k3s and how it works and how to run it.

k3os replaces some complicated OS components with much simpler ones. For example, it uses OpenRC instead of systemd and containerd instead of Docker, it leverages connman for configuring network components, and it doesn’t use a package manager.

The method I am showing in this post uses the k3os overlay installation, which is detailed here. The reason for this choice is that the Pine64 boards use U-Boot to boot the OS, so special steps are needed to accommodate the way this process is handled. The upside of this method is that these instructions should work for pretty much any of the Pi form factor boards, including the newly released Raspberry Pi 4, with minimal changes.

Setup

If you haven’t downloaded and imaged your Pine64 yet, I like to use the ayufan images, which can be found here. You can easily write these images to a microSD card on OSX using something like Etcher.

Assuming the node is connected to your network, you can SSH into it.

ssh rock64@rock64 # or use the ip, password is rock64

When using the overlay installation, the first step is to download the k3os rootfs and lay it down on the host. This step applies to all nodes in the cluster.

curl -sL https://github.com/rancher/k3os/releases/download/v0.2.1/k3os-rootfs-arm64.tar.gz | tar --strip-components=1 -zxvf - -C /

The above command is installing v0.2.1 which is the most current version as of writing this, so make sure to check if there is a newer version available.

After installing, lay down the following configuration in /k3os/system/config.yaml, modifying as needed. After the machine is rebooted this path becomes read-only, so if you need to change the configuration again you will need to edit /etc/fstab to make the location writable again.

Server node

ssh_authorized_keys:
- ssh-rsa <your-public-ssh-key-to-login>
hostname: k3s-master

k3os:
  data_sources:
  - cdrom
  dns_nameservers:
  - 192.168.1.1 # update this to your local or public DNS server if needed
  ntp_servers:
  - 0.us.pool.ntp.org
  - 1.us.pool.ntp.org
  password: rancher
  token: <TOKEN>

The k3s kubeconfig will be written to /etc/rancher/k3s/k3s.yaml on this node, so make sure to grab it if you want to connect to the cluster from outside this node. Reboot the machine to boot into the new filesystem and you should be greeted with the k3os splash screen.
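
To use the cluster from a workstation, copy the kubeconfig off the server node and point it at the node’s address instead of localhost (the paths below are just an example; the default k3os user is rancher):

scp rancher@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
# edit the server: field in ~/.kube/k3s.yaml to use <server-ip> instead of 127.0.0.1
kubectl --kubeconfig ~/.kube/k3s.yaml get nodes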

Agent node

The agent uses nearly the same config, with the addition of the server_url. Just point the agent nodes at the server/master and you should be good to go. Again, reboot after creating the config and the host should boot into the new filesystem with everything ready.

ssh_authorized_keys:
- ssh-rsa <your-public-ssh-key-to-login>
hostname: k3s-node-1

k3os:
  data_sources:
  - cdrom
  dns_nameservers:
  - 192.168.1.1 # update this to your local or public DNS server if needed
  ntp_servers:
  - 0.us.pool.ntp.org
  - 1.us.pool.ntp.org
  password: rancher
  server_url: https://<server-ip-or-hostname>:6443
  token: <TOKEN>

You can do a lot more with the bootstrap configurations, such as setting labels or environment variables. Some folks in the community have had luck getting the wifi configuration working on the RPi4’s out of the box, but I haven’t been able to get it to work yet on my Pine64 cluster. Check the docs for more details on the various configuration options.

After the nodes have been rebooted and the configs applied, the cluster “should just work”. You can check that the cluster is up from the master node using the k3s kubectl passthrough command.

k3s-master [~]$ k3s kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k3s-master   Ready    <none>   6d1h    v1.14.1-k3s.4
k3s-node-1   Ready    <none>   7m10s   v1.14.1-k3s.4

NOTE: After installing the overlay filesystem there will be no package manager and no obvious way to upgrade the kernel, so use this guide for testing purposes only. The project is still very young and a number of things still need to be added, including update mechanisms and HA. Be sure to follow the k3os issue tracker and Rancher Slack (#k3os channel) for updates and developments.

Conclusion

This is easily the best method I have found so far for getting a Kubernetes cluster up and running, minus the few caveats mentioned above, which I believe will be resolved very soon. I have been very impressed with how simple and easy it has been to configure and use. The next step for me is to figure out how to run Rancher and start working on some configurations for running workloads on the cluster. I will share more on that project in another post.

There are definitely some quirks to getting this setup working on the Pi and Pine64 based boards, but they aren’t major problems by any means.

References

This post was heavily inspired by this gist for getting the overlay installation method working on Raspberry Pi.


Set up Drone on arm64 Kubernetes clusters

Continuing with the multiarch and Kubernetes narratives that I have been writing about for a while now, in this post I will be showing off some of the capabilities of the Drone.io continuous integration tool running on my arm64 Kubernetes homelab cluster.

As arm64 continues to gain popularity, more and more projects are adding support for it, including Drone.io. With its semi-recent announcement, Drone can now run on a variety of different architectures. Likewise, the Drone maintainers have been working towards a 1.0 release, which brings first class support for Kubernetes, among other things.

I won’t touch on too many of the specifics of Drone, but if you’re interested in learning more, you can check out the website. I will mostly be focusing on how to get things running in Kubernetes, which turns out to be exactly the same for both amd64 and arm64 architectures. There were a few things I discovered along the way to get the Kubernetes integrations working but for the most part things “just work”.

I started off by grabbing the Helm chart and modifying it to suit my needs. I find it easiest to template the chart and then save it off locally so I can play around with it.

Note: the below example assumes you already have helm installed locally.

git clone git@github.com:helm/charts.git && cd charts
helm template --name drone --namespace cicd \
   --set 'sourceControl.provider=github' \
   --set 'sourceControl.github.clientID=XXXXXXXX' \
   --set 'sourceControl.secret=drone-server-secrets' \
   --set 'server.host=drone.example.com' \
   --set 'server.kubernetes.enabled=false' \
   stable/drone > /tmp/manifest.yaml

Obviously you will want to set the configurations to match your own settings, like domain name and oauth settings (I used Github).

After saving the manifest, the first issue I ran into is that port 9000 is still referenced in the Helm chart; it was used for communication between the client and server in older releases but is no longer needed. So I just completely removed the references to the port in my Frankenstein configuration. If you use the Kubernetes integration mentioned below you won’t run into these problems connecting the server and agent, but if you use the agent configuration you will.

There is some server config that will need to be adjusted as well to get things working. For example, the OAuth settings will need to be created on the Github side first in order for any of this to work. Also, the Drone server host will need to be accessible from the internet, so any firewall rules will need to be added or adjusted to allow traffic.

 env:
  # Webhook settings
  - name: DRONE_ALWAYS_AUTH
    value: "false"
  - name: DRONE_SERVER_HOST
    value: "drone.example.com"
  - name: DRONE_SERVER_PROTO
    #value: http
    value: https
  # Agent config
  - name: DRONE_RPC_SECRET
    valueFrom:
      secretKeyRef:
        name: drone
        key: secret
  # Server config
  - name: DRONE_DATABASE_DATASOURCE
    value: "/var/lib/drone/drone.sqlite"
  - name: DRONE_DATABASE_DRIVER
    value: "sqlite3"
  - name: DRONE_LOGS_DEBUG
    value: "true"
  - name: DRONE_LOGS_PRETTY
    value: "true"
  - name: DRONE_USER_CREATE
    value: "username:<github_user>,machine:false,admin:true,token:abc123"
  # Github config
  - name: DRONE_GITHUB_CLIENT_ID
    value: abcd
  - name: DRONE_GITHUB_SERVER
    value: https://github.com
  - name: DRONE_GITHUB_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: client-secret-drone
        key: secret

Add the DRONE_USER_CREATE env var to bootstrap an admin account when starting the Drone server. This will allow your user to do all of the admin things using the CLI tool.

The secrets should get generated when you dump the Helm chart, so feel free to update those with any values you may need.
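
Once the manifest reflects your environment, applying it is the usual routine (the namespace and file path match the helm template command above):

kubectl create namespace cicd
kubectl apply -n cicd -f /tmp/manifest.yaml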

Note: if you have double checked all of your settings but builds aren’t being triggered, there is a good chance that the webhook is the problem. There is a really good post about how to troubleshoot these settings here.

Running Drone with the Kubernetes integration

This option turned out to be the easier approach. Just add the following configuration to the drone server deployment environment variables, updating according to your environment. For example, the namespace I am deploying to is called “cicd”, which will need to be updated if you choose a different namespace.

- name: DRONE_KUBERNETES_ENABLED
  value: "true"
- name: DRONE_KUBERNETES_NAMESPACE
  value: cicd
- name: DRONE_KUBERNETES_SERVICE_ACCOUNT
  value: drone-pipeline

The main downside to this method is that it creates a Kubernetes job for each build. By default, once these builds are done they will exit and not clean themselves up, so if you do a lot of builds your namespace will get clogged up. Newer versions of Kubernetes can set TTLs on finished jobs so they clean themselves up via the TTLAfterFinished feature gate, but this functionality isn’t enabled by default yet and is a little bit out of the scope of this post.
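
In the meantime, a blunt but effective cleanup is to periodically delete the completed build pods in the namespace, for example:

kubectl -n cicd delete pods --field-selector=status.phase==Succeeded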

Running Drone with the agent configuration

The agent uses the sidecar pattern to run a Docker in Docker (dind) container to connect to the Docker socket in order to allow the Drone agent to do its builds.

The main downside of using this approach is that there seems to be an issue (sometimes) where the Drone components can’t talk to the Docker socket; you can find a better description of this problem and more details here. The problem seems to be a race condition where the Docker socket is not able to be mounted before the agent comes up, but I still haven’t totally solved the problem yet. The advice for getting around this is to run the agent on a dedicated standalone host to avoid race conditions and other pitfalls.

That being said, if you still want to use this method you will need to add an additional deployment to the config for the Drone agent. If you use the agent you can disregard the above Kubernetes environment variable configuration and instead set the appropriate environment variables in the agent. Below is the working snippet I used for deploying the agent to my test cluster.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-agent
  labels:
    app: drone
    component: agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone
        component: agent
    spec:
      serviceAccountName: drone
      containers:
      - name: agent
        image: "docker.io/drone/agent:1.0.0-rc.6"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        env:
          # This value should point to the Drone server service
          - name: DRONE_RPC_SERVER
            value: http://drone.cicd
          - name: DRONE_RPC_SECRET
            valueFrom:
              secretKeyRef:
                name: drone
                key: secret
          - name: DOCKER_HOST
            value: tcp://localhost:2375
          - name: DRONE_LOGS_DEBUG
            value: "true"
          # Uncomment this for additional trace logs
          #- name: DRONE_LOGS_TRACE
          #  value: "true"
          - name: DRONE_LOGS_PRETTY
            value: "true"

      - name: dind
        image: "docker.io/library/docker:18.06-dind"
        imagePullPolicy: IfNotPresent
        env:
        - name: DOCKER_DRIVER
          value: overlay2

        securityContext:
          privileged: true

        volumeMounts:
          - name: docker-graph-storage
            mountPath: /var/lib/docker
      volumes:
      - name: docker-graph-storage
        emptyDir: {}

I have gotten the agent to work, but I haven’t had much success getting it working consistently. I would avoid using this method unless you have to, or, as mentioned above, get a dedicated host for running builds.

Testing it out

After wiring everything up, the rest is easy. Add a file called .drone.yml to a repository that you would like to automate builds for. You can find out more about the various capabilities here.

For my use case I wanted to tell drone to build and publish an arm64 based Docker image whenever a change to master occurs. You can look at my drone configuration here to get a better idea of the multiarch options as well as authenticating to a Docker registry.

After adding the .drone.yml to your repo and triggering a build you should see something similar in your local Drone instance.

Sample Drone build

If everything worked correctly and is green then you should be good to go. Obviously there is a lot of overhead that Kubernetes brings, but the actual Drone setup is really straightforward. Just set things up on the Github side, translate them into Kubernetes configurations, add some other Drone specific config options, and you should have a nice CI/CD environment ready to go.

Conclusion

I am very excited to see Drone grow and mature and use it more in the future. It is simple yet flexible and it fits nicely into the new paradigm of using containers for everything.

The new 1.0 YAML syntax is really nice as well, as it basically maps to the conventions that Kubernetes has chosen, so if you are familiar with that syntax you should feel at home. You can check out the available plugins here, which cover about 90% of the use cases you would see in the wild.

One downside is that YAML syntax errors are really hard to debug, and there isn’t much in the way of output to figure out where your problems are. The best approach I have found is to run the .drone.yml file through the Drone CLI lint/fmt tool before committing and building.
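
Assuming the Drone CLI is installed locally, a quick sanity check before pushing looks something like this:

# validate the pipeline syntax
drone lint
# or run the pipeline locally in Docker to catch problems early
drone exec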

The Drone CLI tool is really powerful and could probably be its own post. There are some links in the references that show off some of its other features.

References

There are a few cool Drone resources I would definitely recommend checking out if you are interested running Drone in your own environment. The docs reference is a great place to start and is great for finding information about how to configure and tweak various Drone settings.

https://docs.drone.io/reference/

Here is a link to the CLI reference.

https://docs.drone.io/cli/

I also definitely recommend checking out the jsonnet extension docs, which can be used to help improve automation workflows. The second link shows a good example of how it works and the third link shows some practical applications of using jsonnet to help manage complicated CI pipelines.

https://docs.drone.io/extend/config/jsonnet/
https://github.com/drone/drone/blob/master/.drone.jsonnet
https://medium.com/dazn-tech/simplify-your-ci-pipeline-configuration-with-jsonnet-5a96cd9ccc51

Here is a link for various cool drone stuff, including blog posts and tools.

https://github.com/drone/awesome-drone
