Deploy AWS SSM agent to CoreOS

If you have been a CoreOS user for long you will undoubtedly have noticed that there is no real package management system.  If you’re not familiar, the CoreOS philosophy is to avoid a package manager entirely and instead lean heavily on Docker containers, along with a few system-level tools, to manage servers.  The problem I recently stumbled across is that the AWS SSM agent is packaged only in deb and RPM formats and is assumed to be installed with a package manager, which obviously won’t work on CoreOS.  In the remainder of this post I will describe the steps I took to get the SSM agent working on a CoreOS/Dockerized server.  Overall I am very happy with how well this solution turned out.

To get started, there is a nice tutorial here for using AWS Session Manager through the console.  The most important thing to do before “installing” the SSM agent on the CoreOS host is to set up the AWS instance with the correct permissions for the agent to communicate with AWS.  To accomplish this, I created a new IAM role and attached the AmazonEC2RoleForSSM policy to it through the AWS console.
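If you prefer the command line, the same setup can be roughly sketched with the aws cli.  Treat this as an outline only: the role and profile names are placeholders, and the trust policy file is assumed to exist and allow ec2.amazonaws.com to assume the role.

# Create a role that the instance can assume (trust policy file assumed to exist)
aws iam create-role --role-name ssm-agent-role \
  --assume-role-policy-document file://ec2-trust-policy.json

# Attach the managed SSM policy to the role
aws iam attach-role-policy --role-name ssm-agent-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

# Wrap the role in an instance profile and associate it with the instance
aws iam create-instance-profile --instance-profile-name ssm-agent-profile
aws iam add-role-to-instance-profile --instance-profile-name ssm-agent-profile \
  --role-name ssm-agent-role
aws ec2 associate-iam-instance-profile --instance-id i-xxxxxxxxxxxxxxxxx \
  --iam-instance-profile Name=ssm-agent-profile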

After this step is done, you can bring up the ssm-agent.

Install the ssm-agent

After ensuring the correct permissions have been applied to the server that is to be managed, the next step is to bring up the agent.  To do this using Docker, there are a few tricks needed to get things working correctly, most notably working around the PID 1 zombie reaping problem that Docker has.

I basically lifted the Dockerfile from here originally and adapted it into my own public Docker image at jmreicha/ssm-agent:latest.  In case readers want to try this out, my image is a little newer than the original source and has a few tweaks.  The Dockerfile itself is mostly straightforward; the main wrinkle is that the ssm-agent process won’t reap child processes in the default Debian image.

In order to work around the child reaping problem I substituted the slick Phusion Docker baseimage, which ships with a very simple process manager that allows shells spawned by the ssm-agent to be reaped when they terminate.  I have my Dockerfile hosted here if you want to check out how the phusion baseimage version works.
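For reference, the general shape is something like the sketch below.  This is not the exact published Dockerfile – the baseimage tag and the agent download URL are illustrative, and for real use you would want to pin a specific agent version.

# Sketch only - pin versions before using this for real
FROM phusion/baseimage:0.11

# Install the packaged agent from the official download location
RUN apt-get update && apt-get install -y curl \
    && curl -o /tmp/amazon-ssm-agent.deb \
       https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb \
    && dpkg -i /tmp/amazon-ssm-agent.deb \
    && rm /tmp/amazon-ssm-agent.deb

# Run the agent under phusion's my_init so spawned shells get reaped
CMD ["/sbin/my_init", "--", "/usr/bin/amazon-ssm-agent"]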

Once the child reaping problem was solved, here is the command I initially used to spin up the container, which of course still didn’t work out of the box.

docker run \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  jmreicha/ssm-agent:latest

I received the following errors.

2018-11-05 17:42:27 INFO [OfflineService] Starting document processing engine...
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [OfflineService] Starting message polling
2018-11-05 17:42:27 INFO [OfflineService] Starting send replies to MDS
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] starting long running plugin manager
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
2018-11-05 17:42:27 INFO [HealthCheck] HealthCheck reporting agent health.
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting session document processing engine...
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
2018-11-05 17:42:27 INFO [StartupProcessor] Executing startup processor tasks
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:27 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully created ssm-user
2018-11-05 17:42:27 ERROR [MessageGatewayService] Failed to add ssm-user to sudoers file: open /etc/sudoers.d/ssm-agent-users: no such file or directory
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0d33006836710e7ef, requestId: 2975fe0d-846d-4256-9d50-57932be03925
2018-11-05 17:42:27 INFO [MessageGatewayService] listening reply.
2018-11-05 17:42:27 INFO [MessageGatewayService] Opening websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully opened websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting receiving message from control channel
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:32 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:35 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.

The first error that jumped out in the logs is “Unable to open serial port”.  There is also an error about not being able to add the ssm-user to the sudoers file.

The fix for these issues is to pass the CoreOS serial device into the container with the Docker flag “--device=/dev/ttyS0” and to add a volume mount for the sudoers path, “-v /etc/sudoers.d:/etc/sudoers.d”.  The full Docker run command is shown below.

docker run -d --restart unless-stopped --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest

After fixing the errors found in the logs and bringing up the containerized SSM agent, go ahead and create a new session in the AWS console.

[screenshot: ssm session]

The session should come up pretty much immediately and you should be able to run commands like you normally would.
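If you would rather skip the console, sessions can also be started from a workstation using the aws cli, assuming the Session Manager plugin for the cli is installed.  For example, using the instance ID from the logs above:

aws ssm start-session --target i-0d33006836710e7ef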

The last thing to (optionally) do is run the agent as a systemd service, so that it starts up automatically if it dies or if the server gets rebooted.  You can probably get away with just using the Docker restart policy if you aren’t interested in configuring a systemd service, which is what I have chosen to do for now.
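If you do want systemd to own the process, a minimal unit might look something like the sketch below.  The unit name and file path are my own choices, and the -d and --restart flags are dropped from the run command since systemd now handles the lifecycle.

# /etc/systemd/system/ssm-agent.service
[Unit]
Description=Containerized AWS SSM agent
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f ssm-agent
ExecStart=/usr/bin/docker run --rm --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest
ExecStop=/usr/bin/docker stop ssm-agent
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

After dropping the file in place, systemctl daemon-reload followed by systemctl enable ssm-agent and systemctl start ssm-agent should do the trick.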

You could even adapt this Docker image into a Kubernetes manifest and run it as a daemonset on each node of the cluster to simplify things and add another layer of security.  I may return to the systemd unit and/or Kubernetes manifest in the future if readers are interested.

Conclusion

[screenshot: session history]

The AWS Session manager is a fantastic tool for troubleshooting/debugging as well as auditing and security.

With SSM you can make sure specific servers are never exposed to the internet directly, and you can also keep track of what commands have been run on a server.  As a bonus, the AWS console keeps track of all the previous sessions that were created, and if you hook up CloudWatch and/or S3 you can see all the commands and the times they were run, with nice simple links to the log files.

SSM also lets you do a lot of other cool stuff, like run scripts against a subset of servers filtered by tags, or against all servers that SSM recognizes.  I’m sure there are other features as well; I just haven’t found them yet.


Configure S3 to store load balancer logs using Terraform

If you’ve ever encountered the following error (or similar) when setting up an AWS load balancer to write its logs to an S3 bucket using Terraform, you are not alone.  I decided to write a quick note about this problem because it is the second time I have been bitten by it and had to spend time Googling around for an answer.  The AWS documentation for creating and attaching the policy makes sense, but the reasoning behind why you need to do it is murky at best.

aws_elb.alb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: <my-bucket> Please check S3bucket permission
status code: 409, request id: xxxx

For reference, here are the docs for manually creating the policy by going through the AWS console.  That method works fine for creating the policy and attaching it to the bucket by hand.  The problem is that it isn’t obvious why this needs to happen in the first place, nor how to do it in Terraform once you figure out why.  Luckily Terraform has great support for IAM, which makes it easy to configure the policy and attach it to the bucket correctly.

Below is an example of how you can create this policy and attach it to your load balancer log bucket.

data "aws_elb_service_account" "main" {}

data "aws_iam_policy_document" "s3_lb_write" {
    policy_id = "s3_lb_write"

    statement {
        actions = ["s3:PutObject"]
        resources = ["arn:aws:s3:::<my-bucket>/logs/*"]

        principals {
            identifiers = ["${data.aws_elb_service_account.main.arn}"]
            type = "AWS"
        }
    }
}
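One gotcha: the aws_iam_policy_document data source only renders the policy JSON; it still needs to be attached to the bucket.  Here is a minimal sketch of the attachment, assuming the bucket itself is managed elsewhere:

resource "aws_s3_bucket_policy" "lb_logs" {
    bucket = "<my-bucket>"
    policy = "${data.aws_iam_policy_document.s3_lb_write.json}"
}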

Notice that you don’t need to explicitly define the principal like you do when setting up the policy manually.  Just use the ${data.aws_elb_service_account.main.arn} data source and Terraform will figure out the region you are deploying to and pick out the correct ELB service account to put in the policy.  You can verify this by checking the table from the link above and cross referencing it with the Terraform output for creating and attaching the policy.

You shouldn’t need to update anything in the load balancer config for this to work; just rerun the failed command and it should succeed.  For completeness, here is what that configuration might look like.

...
access_logs {
    bucket = "${var.my_bucket}"
    prefix = "logs"
    enabled = true
}
...

This process is easy enough, but it still begs the question: why does this seemingly unnecessary step need to happen in the first place?  After searching around for a bit I finally found this:

When Amazon S3 receives a request—for example, a bucket or an object operation—it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, user policies, and resource-based policies (bucket policy, bucket ACL, object ACL) in deciding whether to authorize the request.

Okay, so it basically looks like when the load balancer gets created, the logs are written by an AWS-owned account, which we need to explicitly give permission to through the bucket policy:

If the request is for an operation on an object that the bucket owner does not own, in addition to making sure the requester has permissions from the object owner, Amazon S3 must also check the bucket policy to ensure the bucket owner has not set explicit deny on the object

Note

A bucket owner (who pays the bill) can explicitly deny access to objects in the bucket regardless of who owns it. The bucket owner can also delete any object in the bucket.

There we go.  There is a little bit more information in the link above but now it makes more sense.


Backup Route 53 zones

We have all heard about DNS catastrophes.  I just read about a horror story on reddit the other day, where an Azure root DNS zone was accidentally deleted with no backup.  I experienced a similar disaster a few years ago – a simple DNS change managed to knock out internal DNS for an entire domain, which contained hundreds of records.  Reading the post hit close to home, uncovering some of my own past anxiety, so I began poking around for solutions.  Immediately, I noticed that backing up DNS records is usually skipped over as part of the backup process.  Folks just tend to never do it, for whatever reason.

I did discover, though, that backing up DNS is easy.  So I decided to fix the problem.

I wrote a simple shell script that dumps all Route53 zones for a given AWS account to json files and uploads them to an S3 bucket.  The script is only a handful of lines, which is perfect because it doesn’t take much effort to potentially save your bacon.

If you don’t host DNS in AWS, the script can be modified to work with other DNS providers (assuming they have public APIs).

Here’s the script:

#!/usr/bin/env bash

set -e

# Dump route 53 zones to a text file and upload to S3.

BACKUP_DIR=/home/<user>/dns-backup
BACKUP_BUCKET=<bucket>
# Use full paths for cron
CLIPATH="/usr/local/bin"

# Dump all zones to a file and upload to s3
function backup_all_zones () {
  local zones
  # Enumerate all zones
  zones=$($CLIPATH/aws route53 list-hosted-zones | jq -r '.HostedZones[].Id' | sed "s/\/hostedzone\///")
  for zone in $zones; do
    echo "Backing up zone $zone"
    $CLIPATH/aws route53 list-resource-record-sets --hosted-zone-id $zone > $BACKUP_DIR/$zone.json
  done

  # Upload backups to s3
  $CLIPATH/aws s3 cp $BACKUP_DIR s3://$BACKUP_BUCKET --recursive --sse
}

# Create backup directory if it doesn't exist
mkdir -p $BACKUP_DIR
# Backup up all the things
time backup_all_zones

Be sure to update the <user> and <bucket> in the script to match your own environment settings.  Dumping the DNS records to json is nice because it allows for a more programmatic way of working with the data.  The script can be run manually, but it is much more useful if run automatically.  Just add the script to a cronjob and schedule it to dump DNS periodically, as shown below.
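For example, a crontab entry along these lines would take a nightly dump at 2 AM (the script path and log file are illustrative):

0 2 * * * /home/<user>/dns-backup.sh >> /home/<user>/dns-backup.log 2>&1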

For this script to work, the aws cli and jq need to be installed.  Installation is skipped in this post but is trivial; refer to the links for instructions.

The aws cli needs to be configured with an API key that has read access to Route53 and the ability to write to S3.  Details are skipped for this step as well – be sure to consult the AWS documentation on setting up IAM permissions for help with setting up API keys.  Another, simpler approach is to use a pre-existing key with admin credentials (not recommended).
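As a bonus, the json dumps make restores scriptable too.  Below is a rough (untested) sketch of replaying a dumped zone back into Route53 as a batch of UPSERTs, skipping the NS and SOA records that Route53 creates and manages itself:

#!/usr/bin/env bash

# Usage: ./restore-zone.sh <hosted-zone-id> <zone-file.json>
set -e

ZONE_ID=$1
ZONE_FILE=$2

# Convert the dump into a change batch of UPSERTs, minus NS/SOA
jq '{Changes: [.ResourceRecordSets[]
  | select(.Type != "NS" and .Type != "SOA")
  | {Action: "UPSERT", ResourceRecordSet: .}]}' $ZONE_FILE > /tmp/change-batch.json

aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID \
  --change-batch file:///tmp/change-batch.json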


Create a Kubernetes cluster on AWS and CoreOS with Terraform

Up until my recent discovery of Terraform, the process I had been using to test CoreOS and Kubernetes was somewhat cumbersome and manual.  There are still some manual steps in the bootstrap and cluster creation process that need to get sorted out, but now I can bring environments up and down quickly and automatically.  This is a HUGE time saver, and it also makes testing easier because changes can happen in a matter of minutes rather than hours and can all be self documented for others to reference in a Github repo.  Great success.

NOTE:  This method seems to be broken as of the 0.14.2 release of Kubernetes.  The latest version I could get to work reliably was v0.13.1.  I am following development and looking forward to the v1.0 release, but I won’t revisit this method until something stable has shipped because there are still just too many changes going on.  With that said, v0.13.1 has a lot of useful functionality, and this method is really easy to get working once you have the groundwork laid out.

Another benefit is that as the project develops and matures, the only things that will need to be modified are the cloud configs used here.  So if you follow along, feel free to use my configs as a template and modify them to get this working with a newer release.  As I said, I will revisit the configs once things slow down a little and a v1 has been released.

Terraform

The first component we need to enable in this workflow is Terraform.  From their site, “Terraform is a tool for building, changing, and combining infrastructure safely and efficiently.”  Basically, Terraform is a command line tool that lets you implement your infrastructure as code across a variety of infrastructure providers.  It should go without saying that being able to test environments across different platforms and cloud providers is a gigantic benefit.  It doesn’t lock you in to any one vendor, and it greatly simplifies the process of creating complex infrastructures across different platforms.

Terraform is still a young project but has been maturing nicely, and it currently supports most of the functionality needed for this method to work (the missing pieces are in the dev pipeline and will be released in the near future).  It is also much easier to use and understand than CloudFormation, the proprietary provisioning tool available to AWS customers, which you could use instead if you are in a strictly AWS environment.

The first step is to download and install Terraform.  In this example I am using OSX but the instructions will be similar on Linux or other platforms.

cd /tmp
wget https://dl.bintray.com/mitchellh/terraform/terraform_0.3.7_darwin_amd64.zip
unzip terraform_0.3.7_darwin_amd64.zip
mv terraform* /usr/local/bin

After you have moved the binary you will need to source your shell.  I use zsh so I just ran “source ~/.zshrc” to update the path for terraform.

To test terraform out you can check the version to make sure it works.

terraform version

Now that Terraform is installed you will need to get some terraform files set up.  I suggest making a local terraform directory on your machine so you can turn it into a repo later if desired.  I like to split “services” up by creating different directories.  So within the terraform directory I have created an etcd directory as well as a kubernetes directory, each with its own variables file (which should be very similar).  I don’t know if this approach is a best practice, but it has been working well for me so far.  I will gladly update this workflow if there is a better way to do it.

Here is a sample of what the file and directory layout might look like.

cloud-config
  etcd-1.yml
  etcd-2.yml
  etcd-3.yml
  kube-master.yml
  kube-node.yml
etcd
  dns.tf
  etcd.tf
  variables.tf
kubernetes
  dns.tf
  kubernetes.tf
  variables.tf

As you can see there is a directory for Etcd as well as Kubernetes specific configurations.  You may also notice that there is a cloud-config directory.  This will be used as a central place to put configurations for the different services.

Etcd

With Terraform set up, the next component needed for this architecture is a functioning etcd cluster.  I chose to use a separate 3 node cluster (spread across 3 AZ’s) for improved performance and resiliency.  With a 3 node cluster, the cluster stays operational if one node goes down, whereas if a 1 node cluster goes away you are in much more trouble.  Additionally, if you have other services or servers that need to leverage etcd you can just point them at this cluster.

Luckily, with Terraform it is dead simple to spin new clusters up and down once you have the initial configurations set up correctly.

At the time of this writing I am using the current stable version of CoreOS, 633.1.0, which ships with version 0.4.8 of etcd.  According to the folks at CoreOS, the cloud configs for old versions of etcd should keep working once the new version has been released, so moving to the new 2.0 release should be easy once it hits the release channel, though some tweaks or additional changes to the cloud configs may be needed.

Configuration

Before we get into the details of how all of this works, I would like to point out that many of the settings in these configuration files will be specific to your environment.  For example, I am using an AWS VPC in the “us-east-1” region for this setup, so you may need to adjust some of the settings in these files to match your own scenario.  Other custom components may include security groups, subnet IDs, ssh keys, availability zones, etc.

Terraform offers resources for basically all network components on AWS so you could easily extend these configurations to build out your initial network and environment if you were starting a project like this from scratch.  You can check all the Terraform resources for the AWS provider here.

Warning: This guide makes a few subtle assumptions.  The address scheme we are using for this environment is 192.168.x.x, leveraging 3 subnets (b, c, e) in the us-east-1 AWS region to spread the nodes across for additional availability.  Anything in the configuration that has been filled in with “XXX” represents a custom value that you will need to either create or obtain in your own environment and substitute into the configuration files.

Finally, you will need to provide AWS credentials to allow Terraform to communicate with the API for creating and modifying resources.  You can see where these credentials should be filled in below in the variables.tf file.

variables.tf

variable "access_key" { 
 description = "AWS access key"
 default = "XXX"
}

variable "secret_key" { 
 description = "AWS secret access key"
 default = "XXX"
}

variable "region" {
 default = "us-east-1"
}

/* CoreOS AMI - 633.1.0 */

variable "amis" {
 description = "Base CoreOS AMI"
 default = {
 us-east-1 = "ami-d6033bbe" 
 }
}

Here is what an example CoreOS config looks like.

etcd.tf

provider "aws" {
 access_key = "${var.access_key}"
 secret_key = "${var.secret_key}"
 region = "${var.region}"
}

/* Etcd cluster */

resource "aws_instance" "etcd-01" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1e" 
 instance_type = "t2.micro"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 private_ip = "192.168.1.10"
 user_data = "${file("../cloud-config/etcd-1.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "20"
 } 
}

resource "aws_instance" "etcd-02" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1b" 
 instance_type = "t2.micro"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 private_ip = "192.168.2.10"
 user_data = "${file("../cloud-config/etcd-2.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "20"
 } 
}

resource "aws_instance" "etcd-03" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1c" 
 instance_type = "t2.micro"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 private_ip = "192.168.3.10"
 user_data = "${file("../cloud-config/etcd-3.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "20"
 } 
}

Below is a configuration file that provides a simple way to create DNS records dynamically when spinning up the etcd cluster nodes.

dns.tf

 resource "aws_route53_record" "etcd-01" {
 zone_id = "XXX"
 name = "etcd-01.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.etcd-01.private_ip}"]
}

resource "aws_route53_record" "etcd-02" {
 zone_id = "XXX"
 name = "etcd-02.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.etcd-02.private_ip}"]
}

resource "aws_route53_record" "etcd-03" {
 zone_id = "XXX"
 name = "etcd-03.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.etcd-03.private_ip}"]
}

Once all of the configurations are in place and look right, you can test out what Terraform will do with the “plan” command:

cd etcd
terraform plan

Make sure to change into your etcd directory first.  The plan command will examine your current configuration and calculate any changes.  If your environment is completely unconfigured, this command will return output explaining what terraform is planning to do.

If you don’t want the input prompts when you run the plan command, you can append the “-input=false” flag to bypass them.

If everything looks okay with the plan you can tell Terraform to “apply” your configs with the following:

terraform apply
OR
terraform apply -input=false

If everything goes accordingly, after a few minutes you should have a new 3 node etcd cluster running the latest stable version of CoreOS, with DNS records for interacting with the nodes!  To double check that the servers are being created you can look in the AWS console to see if your newly defined servers have shown up.  The console is a great way to verify that things all worked okay and that the right values were created.
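You can also spot check from the command line if the aws cli is configured; something like the following (the query expression is just one way to slice the output):

aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress,Placement.AvailabilityZone]' \
  --output table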

If you are having trouble with the cloud configs check the end of the post for the link to all of the etcd and Kubernetes cloud configs.

Kubernetes

The Kubernetes configuration is very similar to etcd.  It uses a variables.tf, kubernetes.tf and dns.tf file to configure the Kubernetes cluster.

The following configurations will build a v0.13.1 Kubernetes cluster with 1 master and 3 worker nodes to begin with.  This config can be extended easily to scale the number of worker nodes to basically as many as you want (I could easily imagine hundreds or thousands), simply by changing a few numbers in the configuration and barely adding any overhead to the current process and workflow, which is nice.  A small sketch of this is included after the worker configs below.  Because of these possibilities, Terraform allows for a large amount of flexibility in how you manage your infrastructure.

This configuration is using c3.large instances so be aware that your AWS bill may be affected if you spin nodes up and fail to turn them off when you are done testing.

provider "aws" {
 access_key = "${var.access_key}"
 secret_key = "${var.secret_key}"
 region = "${var.region}"
}

/* Kubernetes cluster */

resource "aws_instance" "kube-master" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1e" 
 instance_type = "c3.large"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 private_ip = "192.168.1.100"
 user_data = "${file("../cloud-config/kube-master.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "20"
 } 
}

resource "aws_instance" "kube-e" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1e" 
 instance_type = "c3.large"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 count = "1"
 user_data = "${file("../cloud-config/kube-node.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "100"
 } 
}

resource "aws_instance" "kube-b" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1b" 
 instance_type = "c3.large"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 count = "1"
 user_data = "${file("../cloud-config/kube-node.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "100"
 } 
}

resource "aws_instance" "kube-c" {
 ami = "${lookup(var.amis, var.region)}"
 availability_zone = "us-east-1c" 
 instance_type = "c3.large"
 subnet_id = "XXX"
 security_groups = ["XXX"]
 key_name = "XXX"
 count = "1"
 user_data = "${file("../cloud-config/kube-node.yml")}"

 root_block_device = {
 device_name = "/dev/xvda"
 volume_type = "gp2"
 volume_size = "100"
 } 
}
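As an aside, one simple way to do the scaling mentioned earlier is to hoist the per-AZ worker count into a variable (the variable name here is hypothetical, not something the configs above define) and reference it from each worker resource:

variable "kube_node_count" {
 description = "Number of Kubernetes worker nodes per availability zone"
 default = "1"
}

/* then in each kube worker resource, replace count = "1" with: */
/* count = "${var.kube_node_count}" */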

And our DNS configuration.

resource "aws_route53_record" "kube-master" {
 zone_id = "XXX"
 name = "kube-master.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.kube-master.private_ip}"]
}

resource "aws_route53_record" "kube-e" {
 zone_id = "XXX"
 name = "kube-e-test.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.kube-e.0.private_ip}"]
}

resource "aws_route53_record" "kube-b" {
 zone_id = "XXX"
 name = "kube-b.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.kube-b.0.private_ip}"]
}

resource "aws_route53_record" "kube-c" {
 zone_id = "XXX"
 name = "kube-c.example.domain"
 type = "A"
 ttl = "300"
 records = ["${aws_instance.kube-c.0.private_ip}"]
}

The variables file for Kubernetes should be identical to the etcd configuration so I have chosen not to place it here.  Just refer to the previous etcd/variables.tf file.

Resources

Since each cloud-config is slightly different (and would take up a lot more space) I have included those files in the gist below.  You will need to populate the “ssh_authorized_keys:” section with your own SSH public key and update any of the IP addresses to reflect your environment.  I apologize if there are any typos; there was a lot of cut and paste.

Cloud configs – https://gist.github.com/jmreicha/7923c295ab6110151127

Much of the configuration I am using is based on the Kubernetes docs, as well as some specific cloud configs that I have adapted, which can be found here.

Another great place to get help with Kubernetes is the #google-containers IRC channel on irc.freenode.net.  The folks that hang out there are super friendly and can almost always answer any questions you have.

As I said, development is still pretty crazy.  You can keep an eye on the releases page for all the latest stuff.

Conclusion

Yes, this can seem very convoluted at first, but if everything works how it should, you now have a quick and easy way to spin identical etcd and/or Kubernetes environments up or down at will, which is pretty powerful.  Also, this method is dramatically easier than most of the methods I have come across so far in my own adventures and testing.

Once you get through the initial confusion and learning curve this workflow becomes a huge timesaver for testing out different versions of Kubernetes and also for experimenting with etcd.  I haven’t quite automated the entire process but I imagine that it would be easy to spin entire environments up and down by gluing all of these pieces together with some simple shell scripts.

If you need to make any configuration updates, for example to put a new version of Kubernetes in place, you will first need to update your Kubernetes master/node cloud configs and then rerun terraform apply to have it recreate your environment.

Be aware that cloud config changes will destroy any nodes that rely on the old configuration, so if you change your cloud config files, make sure you are prepared to deal with the consequences!  Ideally you should get your etcd cluster to a good spot and then leave it alone and just play around with the Kubernetes components; the two have been separated precisely so they can be changed out independently.

Even with this one example you can already start to see the power of Terraform.  It is quickly becoming one of my favorite automation and cloud tools, providing a very easy way to define and build infrastructure through code and configuration.


Mount an external EBS volume in AWS

Creating and attaching external volumes is one of those administration tasks that is really nice to know how to do, but for me it doesn’t happen every day, so it is easy to forget the details, which makes it a little more painful – especially with deadlines and people watching over your shoulder.  Having said that, I think it is worth writing a post about how to do it, because it happens just often enough that I have trouble keeping everything straight, and I’m sure others run into this as well.  So that’s what I will be writing about today.

There is good documentation for how to do this, but there are a lot of separate steps, so consolidating them might be helpful to readers who stumble across this.  I’m sure there are other ways to accomplish this, but I don’t think it is necessary to cover everything here.

Create your “floating volume”

This step is straightforward.  In the AWS EC2 console, choose the type of volume (SSD or magnetic), the availability zone, and any other options.

[screenshot: create ebs volume]

After your volume has been created you will want to attach it to an instance.  This part is important to get right, because a mistake here (or in the fstab changes described below) could break your OS volume.  In this example I am choosing to attach the EBS volume as /dev/xvdf, but you could name it differently to match your setup.

[screenshot: attach ebs volume]

After the volume has been attached you can check that it has been picked up by the OS by either checking the /dev directory or by running fdisk -l and looking for a disk matching the size of the one you just attached.

It is worth pointing out that all of the steps in the AWS console can alternatively be done with the aws-cli tool.  For the sake of time and illustration I am mostly leaving those steps out, though a rough sketch is below; feel free to reach out if you are interested in more cli detail and I can update this post.
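Roughly, the CLI equivalents of the console steps look like this (the IDs, size, and zone are placeholders):

aws ec2 create-volume --availability-zone us-east-1b --size 100 --volume-type gp2
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/xvdf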

If you run fdisk -l you will notice that the device has no filesystem yet, so you will need to format the disk.  In this instance I am formatting the disk as ext4, using the following command.

sudo mkfs.ext4 /dev/xvdf

After the volume has been formatted you can mount it to your OS.

Mounting the volume

sudo mkdir /data
sudo mount -t ext4 /dev/xvdf /data

If you need to resize the filesystem for whatever reason, you can use the resize2fs command.

sudo resize2fs <device>
sudo resize2fs /dev/xvdf

Here you create the directory (if it doesn’t already exist) to use as a mount point and then mount the volume to it.  At this point you could be done, if you just needed temporary access to the storage on this device.  But if you want the mount to persist and survive a reboot, you need to add an entry to your /etc/fstab file to make sure the volume gets mounted to /data after a reboot.  Something like the following would work.

/dev/xvdf       /data   ext4    defaults,nobootwait        0       0

The entry is pretty easy to follow, but it may be confusing for those who are not familiar with how fstab works, so I will break down the various components here.

The first parameter is the location of the volume (/dev/xvdf) and is referred to as the file_system field.

The second parameter specifies where to mount the volume to (/data) and is referred to as the dir field.

The third field is the type.  This is where you specify the filesystem type the volume was formatted with (ext4 in this case).  If you hadn’t formatted the volume previously, this entry would create problems for the OS when it tried to mount the volume from this file.

The fourth field holds the options for the mount.  Here, the defaults,nobootwait section is very important.  If you don’t have the nobootwait option specified, your OS could potentially hang on boot if it couldn’t find the specified volume; this option lets the boot continue if there are any problems.

The fifth field is to either enable or disable the dump option.  Unless you are familiar with or use the dump command you will almost always set this to 0.

The last field is the pass section.  This simply tells the OS whether it should run an fsck on this volume.  Here I have it set to 0 so it doesn’t get checked, but for OS volumes this could be important to turn on.
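A quick way to sanity check the new fstab entry without rebooting is to unmount the volume and remount everything from fstab:

# Unmount, then remount everything in fstab that isn't already mounted;
# an error here means the entry needs fixing before you trust a reboot
sudo umount /data
sudo mount -a
mount | grep /data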

Next steps

There are many more things you can do with fstab so if you are interested in other options for how to mount volumes you can look at the fstab documentation for more insights and information.

If you ever wanted to float this volume to another host it would be easy to do, and it would not require any new or special formatting since that was already taken care of here.  The steps would look similar to the following.

  • Unmount the volume from the current OS
  • Remove the volume’s entry in /etc/fstab
  • Detach the volume from the current instance in the AWS console
  • Attach the volume to the new instance
  • Mount the volume manually in the new OS to test that it works
  • If the mount works, add a new entry in /etc/fstab
  • Done

So that’s pretty much it.  Hopefully this is useful for everybody.
