Set up Drone on arm64 Kubernetes clusters

Continuing with the multiarch and Kubernetes narratives that I have been writing about for a while now, in this post I will be showing off some of the capabilities of the Drone.io continuous integration tool running on my arm64 Kubernetes homelab cluster.

As arm64 continues to gain popularity, more and more projects are adding support for it, including Drone.io. With its fairly recent announcement, Drone can now run on a variety of different architectures. The Drone maintainers have also been working towards a 1.0 release, which brings first class support for Kubernetes, among other things.

I won’t touch on too many of the specifics of Drone, but if you’re interested in learning more, you can check out the website. I will mostly be focusing on how to get things running in Kubernetes, which turns out to be exactly the same for both amd64 and arm64 architectures. There were a few things I had to figure out along the way to get the Kubernetes integration working, but for the most part things “just work”.

I started off by grabbing the Helm chart and modifying it to suit my needs. I find it easiest to template the chart and then save it off locally so I can play around with it.

Note: the below example assumes you already have helm installed locally.

git clone git@github.com:helm/charts.git && cd charts
helm template --name drone --namespace cicd \
   --set 'sourceControl.provider=github' \
   --set 'sourceControl.github.clientID=XXXXXXXX' \
   --set 'sourceControl.secret=drone-server-secrets' \
   --set 'server.host=drone.example.com' \
   --set 'server.kubernetes.enabled=false' \
   stable/drone > /tmp/manifest.yaml

Obviously you will want to adjust the configuration to match your own settings, like the domain name and OAuth settings (I used GitHub).

After saving out the manifest, the first issue I ran into was that port 9000 is still referenced in the Helm chart. That port was used for communication between the client and server in older releases but is no longer used, so I completely removed the references to it in my Frankenstein configuration. If you are only using the Kubernetes integration described below you won’t run into this problem connecting the server and agent, but if you use the agent you will.

There is some server config that will need to be adjusted as well to get things working. For example, the OAuth settings will need to be created on the GitHub side first in order for any of this to work. Also, the Drone server host will need to be accessible from the internet, so any firewall rules will need to be added or adjusted to allow that traffic.

 env:
  # Webhook settings
  - name: DRONE_ALWAYS_AUTH
    value: "false"
  - name: DRONE_SERVER_HOST
    value: "drone.example.com"
  - name: DRONE_SERVER_PROTO
    #value: http
    value: https
  # Agent config
  - name: DRONE_RPC_SECRET
    valueFrom:
      secretKeyRef:
        name: drone
        key: secret
  # Server config
  - name: DRONE_DATABASE_DATASOURCE
    value: "/var/lib/drone/drone.sqlite"
  - name: DRONE_DATABASE_DRIVER
    value: "sqlite3"
  - name: DRONE_LOGS_DEBUG
    value: "true"
  - name: DRONE_LOGS_PRETTY
    value: "true"
  - name: DRONE_USER_CREATE
    value: "username:<github_user>,machine:false,admin:true,token:abc123"
  # Github config
  - name: DRONE_GITHUB_CLIENT_ID
    value: abcd
  - name: DRONE_GITHUB_SERVER
    value: https://github.com
  - name: DRONE_GITHUB_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: client-secret-drone
        key: secret

Add the DRONE_USER_CREATE env var to bootstrap an admin account when starting the Drone server. This will allow your user to do all of the admin things using the CLI tool.
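
For example, you can point the CLI at the server using the bootstrap token from above (drone.example.com and abc123 are just the values used in this config, so substitute your own):

export DRONE_SERVER=https://drone.example.com
export DRONE_TOKEN=abc123
drone info   # should print the bootstrapped admin user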

The secrets should get generated when you template the Helm chart, so feel free to update those with any values you may need.
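
If you would rather create (or rotate) them by hand, something along these lines works. The names and keys match the env vars above and the values are just placeholders:

kubectl -n cicd create secret generic drone --from-literal=secret=$(openssl rand -hex 16)
kubectl -n cicd create secret generic client-secret-drone --from-literal=secret=<github-oauth-client-secret>

From there the templated manifest can be applied to the cicd namespace with kubectl apply.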

Note: if you have double checked all of your settings but builds aren’t being triggered, there is a good chance that the webhook is the problem. There is a really good post about how to troubleshoot these settings here.

Running Drone with the Kubernetes integration

This option turned out to be the easier approach. Just add the following configuration to the Drone server deployment’s environment variables, updating according to your environment. For example, the namespace I am deploying to is called “cicd”, which will need to be updated if you choose a different namespace.

- name: DRONE_KUBERNETES_ENABLED
  value: "true"
- name: DRONE_KUBERNETES_NAMESPACE
  value: cicd
- name: DRONE_KUBERNETES_SERVICE_ACCOUNT
  value: drone-pipeline
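
One thing to double check is that the service account referenced above actually exists in the namespace. Depending on your chart values it may or may not be created for you, so treat this as a quick sketch:

kubectl -n cicd create serviceaccount drone-pipeline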

The main downside to this method is that it creates a Kubernetes job for each build. By default, once these builds are done they will exit and not clean themselves up, so if you do a lot of builds your namespace will get clogged up with finished jobs. Newer versions of Kubernetes have a way to set TTLs on finished jobs so they clean themselves up, via the TTLAfterFinished feature gate, but this functionality isn’t enabled by default yet and is a little bit out of the scope of this post.
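
For reference, this is roughly what the TTL looks like on a Job object once the TTLAfterFinished feature gate is enabled. Drone creates its build jobs for you, so this is just an illustration of the field rather than something you configure in Drone itself (the job name is made up):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-build
spec:
  # Delete the job (and its pods) an hour after it finishes
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: alpine:3.9
          command: ["echo", "done"]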

Running Drone with the agent configuration

The agent deployment uses the sidecar pattern: a Docker in Docker (dind) container exposes a Docker socket that the Drone agent connects to in order to run its builds.

The main downside of using this approach is that there seems to be an intermittent issue where the Drone components can’t talk to the Docker socket; you can find a better description of this problem and more details here. It seems to be a race condition where the Docker socket isn’t available before the agent comes up, but I still haven’t totally solved the problem yet. The advice for getting around this is to run the agent on a dedicated standalone host to avoid race conditions and other pitfalls.

That being said, if you still want to use this method you will need to add an additional deployment to the config for the Drone agent. If you use the agent you can disregard the above Kubernetes environment variable configuration and instead set the appropriate environment variables in the agent. Below is the working snippet I used for deploying the agent to my test cluster.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-agent
  labels:
    app: drone
    component: agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone
        component: agent
    spec:
      serviceAccountName: drone
      containers:
      - name: agent
        image: "docker.io/drone/agent:1.0.0-rc.6"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
        env:
          # This value should point to the Drone server service
          - name: DRONE_RPC_SERVER
            value: http://drone.cicd
          - name: DRONE_RPC_SECRET
            valueFrom:
              secretKeyRef:
                name: drone
                key: secret
          - name: DOCKER_HOST
            value: tcp://localhost:2375
          - name: DRONE_LOGS_DEBUG
            value: "true"
          # Uncomment this for additional trace logs
          #- name: DRONE_LOGS_TRACE
          #  value: "true"
          - name: DRONE_LOGS_PRETTY
            value: "true"

      - name: dind
        image: "docker.io/library/docker:18.06-dind"
        imagePullPolicy: IfNotPresent
        env:
        - name: DOCKER_DRIVER
          value: overlay2

        securityContext:
          privileged: true

        volumeMounts:
          - name: docker-graph-storage
            mountPath: /var/lib/docker
      volumes:
      - name: docker-graph-storage
        emptyDir: {}

I have gotten the agent to work, but I haven’t had much success getting it to work consistently. I would avoid using this method unless you have to, or, as mentioned above, use a dedicated host for running builds.

Testing it out

After wiring everything up, the rest is easy. Add a file called .drone.yml to a repository that you would like to automate builds for. You can find out more about the various capabilities here.

For my use case I wanted to tell Drone to build and publish an arm64-based Docker image whenever a change to master occurs. You can look at my Drone configuration here to get a better idea of the multiarch options as well as authenticating to a Docker registry.
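
For reference, a trimmed down 1.0-style pipeline for that kind of workflow looks roughly like the following. The repository and registry secret names are placeholders, so adjust them to match your own setup:

---
kind: pipeline
name: arm64

platform:
  os: linux
  arch: arm64

steps:
  - name: publish
    image: plugins/docker
    settings:
      # Placeholder image repository
      repo: example/myimage
      tags: latest
      # Placeholder secret names stored in Drone
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
    when:
      branch:
        - master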

After adding the .drone.yml to your repo and triggering a build you should see something similar in your local Drone instance.

Sample Drone build

If everything worked correctly and is green then you should be good to go. Obviously there is a lot of overhead that Kubernetes brings, but the actual Drone setup is really straightforward. Just set things up on the GitHub side, translate them into Kubernetes configuration, add some other Drone specific config options, and you should have a nice CI/CD environment ready to go.

Conclusion

I am very excited to see Drone grow and mature and use it more in the future. It is simple yet flexible and it fits nicely into the new paradigm of using containers for everything.

The new 1.0 YAML syntax is really nice as well, as it basically maps to the conventions that Kubernetes has chosen, so if you are familiar with that syntax you should feel at home. You can check out the available plugins here, which cover about 90% of the use cases you would see in the wild.

One downside is that YAML syntax errors are really hard to debug, and there isn’t much in the way of output to figure out where your problems are. The best approach I have found is to run the .drone.yml file through the Drone CLI lint/fmt tool before committing and building.
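
For example, something along these lines from the root of the repo before pushing (the exact flags vary a bit between CLI versions):

drone lint   # check the .drone.yml for syntax/schema problems
drone fmt    # normalize the formatting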

The Drone CLI tool is really powerful and could probably be its own post. There are some links in the references that show off some of its other features.

References

There are a few cool Drone resources I would definitely recommend checking out if you are interested in running Drone in your own environment. The docs reference is a great place to start and is useful for finding information about how to configure and tweak various Drone settings.

https://docs.drone.io/reference/

Here is a link to the CLI reference.

https://docs.drone.io/cli/

I also definitely recommend checking out the jsonnet extension docs, which can be used to help improve automation workflows. The second link shows a good example of how it works and the third link shows some practical applications of using jsonnet to help manage complicated CI pipelines.

https://docs.drone.io/extend/config/jsonnet/
https://github.com/drone/drone/blob/master/.drone.jsonnet
https://medium.com/dazn-tech/simplify-your-ci-pipeline-configuration-with-jsonnet-5a96cd9ccc51

Here is a link for various cool drone stuff, including blog posts and tools.

https://github.com/drone/awesome-drone


Multiarch Docker builds using Shippable

Recently I have been experimenting with different ways of building multi architecture Docker images.  As part of this process I wrote about Docker image manifests and the different ways you can package multi architecture builds into a single Docker image.  Packaging the images is only half the problem though.  You basically need to create the different Docker images for the different architectures first, before you are able to package them into manifests.

There are several ways to go about building the Docker images for various architectures.  In the remainder of this post I will be showing how you can build Docker images natively against arm64 only as well as amd64/arm64 simultaneously using some slick features provided by the folks at Shippable.  Having the ability to automate multi architecture builds with CI is really powerful because it avoids having to use other tools or tricks which can complicate the process.

Shippable recently announced integrated support for arm64 builds.  The steps for creating these cross platform builds are fairly straightforward and are documented on their website.  The only downside to using this method is that currently you must explicitly contact Shippable and request access to use the arm64 pool of nodes for running jobs, but after that multi arch builds should be available.

For reference, here is the full shippable.yml file I used to test out the various types of builds and their options.

Arm64 only builds

After enabling the shippable_shared_aarch64 node pool (from the instructions above) you should have access to arm64 builds; just add the following block to your shippable.yml file.

runtime:
  nodePool: shippable_shared_aarch64

The only other change that needs to be made is to point the shippable.yml file at the newly added node pool and you should be ready to build on arm64.  You can use the default “managed” build type in Shippable to create builds.

Below I have a very simple example shippable.yml file for building a Dockerfile and pushing its image to my Dockerhub account.  The shippable.yml file for this build lives in the GitHub repo I configured Shippable to track.

language: none

runtime:
  nodePool:
    - shippable_shared_aarch64
    - default_node_pool

build:

  ci:
    - sed -i 's|registry.fedoraproject.org/||' Dockerfile.fedora-28
    - docker build -t local/freeipa-server -f Dockerfile.fedora-28 .
    - tests/run-master-and-replica.sh local/freeipa-server

  post_ci:
    - docker tag local/freeipa-server jmreicha/freeipa-server:test
    - docker push jmreicha/freeipa-server:test

integrations:
  hub:
    - integrationName: dockerhub
      type: dockerRegistryLogin

Once you have a shippable.yml file in a repo that you would like to track and also have things set up on the Shippable side, then every time a commit/merge happens on the master branch (or whatever branch you set up Shippable to track) an arm64 Docker image gets built and pushed to the Dockerhub.

Docs for setting up this CI style job can be found here.  There are many other configuration settings available to tune so I would encourage you to read the docs and also play around with the various options.

Parallel arm64 and amd64 builds

The approach for doing the simultaneous parallel builds is a little bit different and adds a little bit more complexity, but I think it is worth it for the ability to automate cross platform builds.  There are a few things to note about the below configuration.  You can use templates in either style of job.  Also, notice the use of the shipctl command.  This tool basically allows you to mimic some of the functionality that exists in the default runCI jobs, including the ability to log in to Docker registries via shell commands and to manage other tricky parts of the build pipeline, like moving into the correct directory to build from.

Most of the rest of the config is pretty straightforward.  The top level jobs directive lets you create multiple different jobs, which in turn allows you to set the runtime to use different node pools, which is how we build against amd64 and arm64.  Jobs also allow for setting different environment variables, among other things.  The full docs for jobs show all of the various capabilities of these jobs.

templates: &build-test-push
  - export HUB_USERNAME=$(shipctl get_integration_field "dockerhub" "username")
  - export HUB_PASSWORD=$(shipctl get_integration_field "dockerhub" "password")
  - docker login --username $HUB_USERNAME --password $HUB_PASSWORD
  - cd $(shipctl get_resource_state "freeipa-container-gitRepo")
  - sed -i 's|registry.fedoraproject.org/||' Dockerfile.fedora-27
  - sed -i 's/^# debug:\s*//' Dockerfile.fedora-27
  - docker build -t local/freeipa-server -f Dockerfile.fedora-27 .
  - tests/run-master-and-replica.sh local/freeipa-server
  - docker tag local/freeipa-server jmreicha/freeipa-server:$arch
  - docker push jmreicha/freeipa-server:$arch

resources:
    - name: freeipa-container-gitRepo
      type: gitRepo
      integration: freeipa-container-gitRepo
      versionTemplate:
          sourceName: jmreicha/freeipa-container
          branch: master

jobs:
  - name: build_amd64
    type: runSh
    runtime:
      nodePool: default_node_pool
      container: true
    integrations:
      - dockerhub
    steps:
      - IN: freeipa-container-gitRepo
      - TASK:
          runtime:
            options:
              env:
                - privileged: --privileged
                # Also look at using SHIPPABLE_NODE_ARCHITECTURE env var
                - arch: amd64
          script:
            - *build-test-push

  - name: build_arm64
    type: runSh
    runtime:
      nodePool: shippable_shared_aarch64
      container: true
    integrations:
      - dockerhub
    steps:
      - IN: freeipa-container-gitRepo
      - TASK:
          runtime:
            options:
              env:
                - privileged: --privileged
                - arch: arm64
          script:
            - *build-test-push

As you can see, there is a lot more manual configuration going on here than the first job.

I decided to use the top level templates directive to basically DRY the configuration so that it can be reused.  I am also setting environment variables per job to ensure the correct architecture gets built and pushed for the various platforms.  Otherwise the configuration is mostly straightforward.  If you haven’t set these types of jobs up before, most of the confusion comes from figuring out where things get configured in the Shippable UI.

Conclusion

I must admit, Shippable is really easy to get started with, has good support and has good documentation.  I am definitely a fan and will recommend and use their products whenever I get a chance.  If you are familiar with Travis then using Shippable is easy.  Shippable even supports the use of Travis compatible environment variables, which makes porting over Travis configs really easy.  I hope to see more platforms and architectures supported in the future but for now arm64 is a great start.

There are some downsides to using the parallel builds for multi architecture builds.  Namely, there is more overhead in setting up the job initially.  With runSh (and other unmanaged jobs) you don’t really have access to some of the top level yml declarations that come with managed jobs, so you will need to spend more time figuring out how to wire up the logic manually using shell commands and the shipctl tool as depicted in my above example.  This ends up being more flexible in the long run but also harder to understand and get working to begin with.

Another downside of the assembly line style jobs like runSh is that they currently can’t leverage all the features that the runCI job can, including the matrix generation (though there is a feature request to add it in the future) and report parsing.

The last downside when setting up unmanaged jobs is trying to figure out how to wire up the different components on the Shippable side of things.  For example, you don’t just create a runCI job like the first example.  You first have to create an integration with the repo that you are configuring so that Shippable can create an rSync job and several runSh jobs to connect with the repo and work correctly.

Overall though, I love both of the runSh and runCI jobs.  Both types of jobs lend themselves to being flexible and composable and are very easy to work with.  I’d also like to mention that the support has been excellent, which is a big deal to me.  The support team was super responsive and helpful trying to sort out my issues.  They even opened some PRs on my test repo to fix some issues.  And as far as I know, there are no other CI systems currently offering native arm64 builds which I believe will become more important as the arm architecture continues to gain momentum.


Exploring Docker Manifests

As part of my recent project to build an ARM based Kubernetes cluster (more on that in a different post) I have run into quite a few cross platform compatibility issues trying to get containers working in my cluster.

After a little bit of digging, I found that support for manifests was added in version 2.2 of the Docker image specification, which allows Docker images to be built against different platforms, including arm and arm64.  To add to this, I just recently discovered that in newer versions of Docker, there is a manifest sub-command that you can enable as an experimental feature, which allows you to interact with image manifests.  The manifest command is great for exploring Docker images without having to pull, run and test them locally, or fight with curl to get this information about an image from a Docker registry.

Enable the manifest command in Docker

First, make sure to have a fairly recent version of Docker installed; I’m using 18.03.1 in this post.

Edit your docker configuration file, usually located in ~/.docker/config.json.  The following example assumes you have authentication configured, but really the only additional configuration needed is the { “experimental”: “enabled” }.

{
  "experimental": "enabled",
    "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXX"
    }
  }
}

After adding the experimental configuration to the client you should be able to access the docker manifest commands.

docker manifest -h

To inspect a manifest just provide an image to examine.

docker manifest inspect traefik

This will spit out a bunch of information about the Docker image, including schema, platforms, digests, etc., which can be useful for finding out which platforms different images support.

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:36df85f84cb73e6eee07767eaad2b3b4ff3f0a9dcf5e9ca222f1f700cb4abc88",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:f98492734ef1d8f78cbcf2037c8b75be77b014496c637e2395a2eacbe91e25bb",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:7221080406536c12abc08b7e38e4aebd811747696a10836feb4265d8b2830bc6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

As you can see, the above image (traefik) supports the amd64, arm and arm64 architectures.  This is a really handy way of determining if an image works across different platforms without having to pull it and run a command against it to see if it works.  The manifest sub-command has some other useful features that allow you to create, annotate and push cross platform images, but I won’t go into much detail here.
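
For reference, the basic create/annotate/push workflow looks roughly like this. The image names are hypothetical and the per-architecture images need to already exist in the registry:

# Combine existing per-arch images into a single manifest list
docker manifest create example/app:latest example/app:amd64 example/app:arm64
# Tag the arm64 entry with its platform details
docker manifest annotate example/app:latest example/app:arm64 --os linux --arch arm64
# Push the manifest list to the registry
docker manifest push example/app:latest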

Manifest tool

I’d also like to quickly mention the Docker manifest-tool.  This tool is more or less superseded by the built-in Docker manifest command but still works basically the same way, allowing users to inspect, annotate, and push manifests.  The manifest-tool has a few additional features and supports several registries other than Dockerhub, and even has a utility script to see if a given registry supports the Docker v2 API and 2.2 image spec.  It is definitely still a good tool to look at if you are interested in publishing multi platform Docker images.

Downloading the manifest tool is easy as it is distributed as a Go binary.

curl -OL https://github.com/estesp/manifest-tool/releases/download/latest/manifest-tool-linux-amd64
mv manifest-tool-linux-amd64 manifest-tool
chmod +x manifest-tool

Once you have the manifest-tool set up you can start using it, similar to the docker manifest inspect command.

./manifest-tool inspect traefik

This will dump out information about the image manifest if it exists.

Name:   traefik (Type: application/vnd.docker.distribution.manifest.list.v2+json)
Digest: sha256:eabb39016917bd43e738fb8bada87be076d4553b5617037922b187c0a656f4a4
 * Contains 3 manifest references:
1    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
1       Digest: sha256:e65103d16ded975f0193c2357ccf1de13ebb5946894d91cf1c76ea23033d0476
1  Mfst Length: 739
1     Platform:
1           -      OS: linux
1           - OS Vers:
1           - OS Feat: []
1           -    Arch: amd64
1           - Variant:
1           - Feature:
1     # Layers: 2
         layer 1: digest = sha256:03732cc4924a93fcbcbed879c4c63aad534a63a64e9919eceddf48d7602407b5
         layer 2: digest = sha256:6023e30b264079307436d6b5d179f0626dde61945e201ef70ab81993d5e7ee15

2    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
2       Digest: sha256:6cb42aa3a9df510b013db2cfc667f100fa54e728c3f78205f7d9f2b1030e30b2
2  Mfst Length: 739
2     Platform:
2           -      OS: linux
2           - OS Vers:
2           - OS Feat: []
2           -    Arch: arm
2           - Variant: v6
2           - Feature:
2     # Layers: 2
         layer 1: digest = sha256:8996ab8c9ae2c6afe7d318a3784c7ba1b1b72d4ae14cf515d4c1490aae91cab0
         layer 2: digest = sha256:ee51eed0bc1f59a26e1d8065820c03f9d7b3239520690b71fea260dfd841fba1

3    Mfst Type: application/vnd.docker.distribution.manifest.v2+json
3       Digest: sha256:e12dd92e9ae06784bd17d81bd8b391ff671c8a4f58abc8f8f662060b39140743
3  Mfst Length: 739
3     Platform:
3           -      OS: linux
3           - OS Vers:
3           - OS Feat: []
3           -    Arch: arm64
3           - Variant: v8
3           - Feature:
3     # Layers: 2
         layer 1: digest = sha256:78fe135ba97a13abc86dbe373975f0d0712d8aa6e540e09824b715a55d7e2ed3
         layer 2: digest = sha256:4c380abe0eadf15052dc9ca02792f1d35e0bd8a2cb1689c7ed60234587e482f0

Likewise, you can annotate and push image manifests using the manifest-tool.  Below is an example command for pushing multiple image architectures.

./manifest-tool --docker-cfg '~/.docker' push from-args --platforms "linux/amd64,linux/arm64" --template jmreicha/example:test --target "jmreicha/example:test"

mquery

I’d also like to touch quickly on the mquery tool.  If you’re only interested in seeing whether a Docker image uses a manifest list, as well as high level multi-platform information, you can run this tool as a container.

docker run --rm mplatform/mquery traefik

Here’s what the output might look like.  Super simple but useful for quickly getting platform information.

Image: traefik
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm64/v8

This can be useful if you don’t need a solution that is quite as heavy as manifest-tool or enabling the built in Docker experimental support.

You will still need to figure out how to build the image for each architecture first before pushing, but having the ability to use one image for all architectures is a really nice feature.

There is work going on in the Docker and Kubernetes communities to start leveraging the features of the 2.2 spec to create multi platform images using a single name.  This will be a great boon for helping to bring ARM adoption to the forefront and will help make the container experience on ARM much better going forward.


Test Rancher 2.0 using Minikube

If you haven’t heard yet, Rancher recently announced that they will be building a new v2.0 of their popular container orchestration and management platform specifically to run on top of Kubernetes.  In the container realm, Kubernetes has recently become a clear favorite in the battle of orchestration and container management.  There are still other options available, but it is becoming increasingly clear that Kubernetes has the largest community, user base and overall feature set, so a lot of the new developments are building onto Kubernetes rather than competing with it directly.  Ultimately I think this move to build on Kubernetes will be good for the container and cloud community as companies can focus more narrowly on challenges tied specifically to security, networking, management, etc., rather than continuing to invent ways to run containers.

With Minikube and the Docker for Mac app, testing out this new Rancher 2.0 functionality is really easy.  I will outline the (rough) process below, but a lot of the nuts and bolts are hidden in Minikube and Rancher.  So if you’re really interested in learning about what’s happening behind the scenes, you can take a look at the Minikube and Rancher logs in greater detail.

Speaking of Minikube and Rancher, there are a few low level prerequisites you will need to have installed and configured to make this process work smoothly, which are listed out below.

Prerequisites

  • Tested on OSX
  • Get Minikube working – use the Kubernetes/Minikube notes as a reference (you may need to bump memory to 4GB)
  • Working version of kubectl
  • Install and configure the Docker for Mac app

I won’t cover the installation of these prerequisites, but I have blogged about a few of them before and have provided links above for instructions on getting started if you aren’t familiar with any of them.

Get Rancher 2.0 working locally

The quick start guide on the Rancher website has good details for getting this working – http://rancher.com/docs/rancher/v2.0/en/quick-start-guide/.  On OSX you can use the Docker for Mac app to get a current version of Docker and compose.  After Docker is installed, the following command will start the Rancher container for testing.

docker run -d --restart=unless-stopped -p 8080:8080 --name rancher-server rancher/server:preview

Check that you can access the Rancher 2.0 UI by navigating to http://localhost:8080 in your browser.

If you wanted to dummy a host name to make access a little bit easier you could just add an extra entry to /etc/hosts.
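
For example, an entry like the following (the hostname is arbitrary) would let you browse to http://rancher.local:8080 instead:

# /etc/hosts
127.0.0.1   rancher.local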

Import Minikube

You can import an existing cluster into the Rancher environment.  Here we will import the local Minikube instance we got going earlier so we can test out some of the new Rancher 2.0 functionality.  Alternatively, you could also add a host from a cloud provider.

In Rancher go to Hosts, Use Existing Kubernetes.

Use existing Kubernetes

Then grab the IP address that your local machine is using on your network.  If you aren’t familiar, on OSX you can open a terminal and run “ifconfig” to find the IP your machine is using.  Also make sure to set the port to 8080, unless you otherwise modified the port mapping earlier when starting Rancher.

host registration url

Registering the host will generate a command to run that applies configuration on the Kubernetes cluster.  Just copy this kubectl command in Rancher and run it against your Minikube machine.

kubectl url

The above command will join Minikube into the Rancher environment and allow Rancher to manage it.  Wait a minute for the Rancher components (mainly the rancher-agent container/pod) to bootstrap into the Minikube environment.  Once everything is up and running, you can check things with kubectl.

kubectl get pods --all-namespaces | grep rancher

Alternatively, to verify this, you can open the Kubernetes dashboard with the “minikube dashboard” command and see the rancher-agent running.

kubernetes dashboard

On the Rancher side of things, after a few minutes, you should see the Minikube instance show up in the Rancher UI.

rancher dashboard

That’s it.  You now have a working Rancher 2.0 instance that is connected to a Kubernetes cluster (Minikube).  Getting the environment to this point should give you enough visibility into Rancher and Kubernetes to start tinkering and learning more about the new features that Rancher 2.0 offers.

The new Rancher 2.0 UI is nice and simplifies a lot of the painful aspects of managing and administering a Kubernetes cluster.  For example, on each host there are metrics for memory, cpu, disk, etc. as well as specs about the server and its hardware.  There are also built in conveniences for dealing with load balancers, secrets and other components that are normally a pain to deal with.  While 2.0 is still rough around the edges, I see a lot of promise in the idea of building a management platform on top of Kubernetes to make administrative tasks easier, and you can still exec into containers and check logs easily from the UI, which is one of my favorite parts about Rancher.  The extra visualization is a nice touch for folks that aren’t interested in the CLI or don’t need to know how things work at a low level.

When you’re done testing, simply stop the rancher container and start it again whenever you need to test.  Or just blow away the container and start over if you want to start Rancher again from scratch.
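
Since the container was started with --name rancher-server, the normal Docker lifecycle commands are all you need:

docker stop rancher-server    # pause the test environment
docker start rancher-server   # pick it back up later
docker rm -f rancher-server   # blow it away to start from scratch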


Templated Nginx configuration with Bash and Docker

Shoutout to @shakefu for his Nginx and Bash wizardry in figuring a lot of this stuff out.  I’d like to take credit for this, but he’s the one who got a lot of it working originally.

Sometimes it can be useful to template Nginx files to use environment variables to fine tune and adjust control for various aspects of Nginx.  A recent example of this idea was a scenario where I set up an Nginx proxy with a very bare bones configuration.  As part of the project, I wanted a quick and easy way to update some of the major Nginx configurations like the port it uses to listen for traffic, the server name, upstream servers, etc.

It turns out that there is a quick and dirty way to template basic Nginx configurations using Bash, which ended up being really useful so I thought I would share it.  There are a few caveats to this method but it is definitely worth the effort if you have a simple setup or a setup that requires periodic changes.  I stuck the configuration into a Dockerfile so that it can easily be updated and ported around – by using the nginx:alpine image as the base image the total size all said and done is around 16MB.  If you’re not interested in the Docker bits, feel free to skip them.

The first part of using this method is to create a simple configuration file that will be used to substitute in some environment variables.  Here is a simple template that is useful for changing a few Nginx settings.  I called it nginx.tmpl, which will be important for how the template gets rendered later.

events {}

http {
  error_log stderr;
  access_log /dev/stdout;

  upstream upstream_servers {
    server ${UPSTREAM};
  }

  server {
    listen ${LISTEN_PORT};
    server_name ${SERVER_NAME};
    resolver ${RESOLVER};
    set ${ESC}upstream ${UPSTREAM};

    # Allow injecting extra configuration into the server block
    ${SERVER_EXTRA_CONF}

    location / {
       proxy_pass ${ESC}upstream;
    }
  }
}

The configuration is mostly straightforward.  We are basically just using this configuration file and inserting a few templated variables denoted by the ${VARIABLE} syntax, which are just environment variables that get inserted into the configuration when it gets bootstrapped.  There are a few “tricks” that you may need to use if your configuration starts to get more complicated.  The first is the use of the ${ESC} variable.  Nginx uses the ‘$’ for its variables, which is also used by the template.  The extra ${ESC} basically just gives us a way to escape that $ so that we can use Nginx variables as well as templated variables.
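
To make the escaping a little more concrete, here is roughly what one of the templated lines looks like before and after envsubst runs, using the defaults set in the Dockerfile below (ESC='$' and UPSTREAM=icanhazip.com:80):

# Template line in nginx.tmpl
set ${ESC}upstream ${UPSTREAM};

# Rendered result in nginx.conf
set $upstream icanhazip.com:80;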

The other interesting thing that we discovered (props to shakefu for this magic) was that you can basically jam arbitrary server block level configurations into an environment variable.  We do this with the ${SERVER_EXTRA_CONF} in the above configuration and I will show an example of how to use that environment variable later.

Next, I created a simple Dockerfile that provides some default values for some of the various templated variables.  The Dockerfile also copies the templated configuration into the image, and does some Bash magic for rendering the template.

FROM nginx:alpine

ENV LISTEN_PORT=8080 \
  SERVER_NAME=_ \
  RESOLVER=8.8.8.8 \
  UPSTREAM=icanhazip.com:80 \
  UPSTREAM_PROTO=http \
  ESC='$'

COPY nginx.tmpl /etc/nginx/nginx.tmpl

CMD /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

There are some things to note.  First, not all of the variables in the template need to be declared in the Dockerfile, which means that if the variable isn’t set it will be blank in the rendered template and just won’t do anything.  There are some variables that need defaults, so if you ever run across that scenario you can just add them to the Dockerfile and rebuild.

The other interesting thing is how the template gets rendered.  There is a small utility called envsubst, part of the gettext package, that substitutes the values of environment variables into files.  In the Dockerfile, this tool gets executed as part of the default command, taking the template as the input and creating the final configuration.

/bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf

Nginx gets started in a slightly silly way so that daemon mode can be disabled (we want Nginx running in the foreground) and if that fails, the rendered template gets read to help look for errors in the rendered configuration.

&& nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

To quickly test the configuration, you can create a simple docker-compose.yml file with a few of the desired environment variables, like I have below.

version: '3'
services:
  nginx_proxy:
    build:
      context: .
      dockerfile: Dockerfile
    # Only test the configuration
    #command: /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && cat /etc/nginx/nginx.conf"
    volumes:
      - "./nginx.tmpl:/etc/nginx/nginx.tmpl"
    ports:
      - 80:80
    environment:
    - SERVER_NAME=_
    - LISTEN_PORT=80
    - UPSTREAM=test1.com
    - UPSTREAM_PROTO=https
    # Override the resolver
    - RESOLVER=4.2.2.2
    # The following would add an escape if it isn't in the Dockerfile
    # - ESC=$$

Then you can bring up the Nginx server.

docker-compose up

The configuration doesn’t get rendered until the container is run, so to test the configuration only, you could add a command to the docker-compose file (like the commented out one above) that renders the configuration and then another command that prints out the rendered configuration to make sure it looks right.
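
Alternatively, a one-off run against the compose service can render the template and have Nginx validate it without actually serving traffic. This is just a sketch using the service name from the compose file above:

docker-compose run --rm nginx_proxy /bin/sh -c \
  "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -t -c /etc/nginx/nginx.conf"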

If you are interested in adding additional configuration you can use the ${SERVER_EXTRA_CONF} variable as alluded to above.  Below is an arbitrary example of extra configuration that can be assigned to the environment variable; it allows connections to long poll Nginx, which basically means that Nginx will try to hold existing connections open for longer.

error_page 420 = @longpoll;
if ($arg_wait = "true") { return 420; }
location @longpoll {
  # Proxy requests to upstream
  proxy_pass $upstream;
  # Allow long lived connections
  proxy_buffering off;
  proxy_read_timeout 900s;
  keepalive_timeout 160s;
  keepalive_requests 100000;
}

The above snippet is a perfectly valid environment variable as far as the container is concerned; it will just look a little bit weird to the eye.

nginx proxy environment variables

That’s all I’ve got for now.  This minimal templated Nginx configuration is handy for testing out simple web servers, especially for proxies and is also nice to port around using Docker.
