ECS cluster turnup with CoreOS and Terraform

Recently I have been evaluating different container clustering tools and technologies.  It has been a fun experience so far; the tools and community being built around Docker have come a long way since I last looked at them.  So for today’s post I’d like to go over ECS a little bit.

ECS is essentially the AWS take on container management.  It handles creation, management, destruction and scheduling of your Docker (container) infrastructure, and it provides API integration with other AWS services, which is really powerful.  To get ECS up and running, all you need to do is create an ECS cluster, either from the AWS console or from some other AWS integration like the CLI or Terraform, then install the agent on the servers that you would like ECS to schedule work on.  After setting up the agent and cluster name you are basically ready to go: create a task, then create a service to start running containers on the cluster.  Some cool new features were announced at this year’s re:Invent conference but I haven’t had a chance to look at them yet.
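For reference, the same flow can be driven entirely from the AWS CLI.  The cluster, task and service names below are placeholders and my-task.json is a hypothetical task definition file, so treat this as a rough sketch rather than a copy/paste recipe:

# create the cluster that the agents will register against
aws ecs create-cluster --cluster-name my-cluster

# register a task definition and start a service that runs it on the cluster
aws ecs register-task-definition --cli-input-json file://my-task.json
aws ecs create-service --cluster my-cluster --service-name my-service --task-definition my-task --desired-count 2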

First impression of ECS

The best part about testing ECS by far has been how easy it is to get set up and running.  It took less than 20 minutes to go from nothing to a fully functioning cluster that was scheduling containers to hosts and receiving load.  I think the most powerful aspect of ECS is its integration with other AWS services.  For example, if you need to attach containers/services to a load balancer, the AWS infrastructure is already there, so the different pieces really mesh well together.

The biggest downside so far is that the ECS console interface is still clunky.  It is functional, and I have been able to use it to do everything I have needed, but it feels like it needs some polish: things are nested in menus and are usually not easy to find.  I’m sure there are plans to improve the interface, and as mentioned above some new features were recently announced, so I have a feeling there are some nice improvements on the way.

I haven’t tried the CLI tool yet but it looks promising for automating containers and services.

Setting things up

Since I am a big fan of CoreOS I decided to try turning up my ECS cluster using CoreOS as the base OS and Terraform to do the heavy lifting and provisioning.

The first step is to create your cluster.  I noticed that in the AWS console there is a configuration wizard that guides you through your first cluster, which was annoying because there wasn’t a clean way to just create the cluster on its own.  So you will need to follow the on-screen instructions to get your first environment set up.  If any of this is unclear, there is a good guide for getting started with ECS here.

After your cluster has been created there is a menu that shows your ECS environments.

ECS cluster menu

Next, you will need to turn up the nodes that will be connecting to this cluster.  The first step is to get your cloud-config set up to connect to the cluster.  I used the CoreOS docs to set up the ECS agent, making sure to change the ECS_CLUSTER= value in the config.

#cloud-config

coreos:
  units:
    -
      name: amazon-ecs-agent.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=Amazon ECS Agent
        After=docker.service
        Requires=docker.service
        Requires=network-online.target
        After=network-online.target

        [Service]
        Environment=ECS_CLUSTER=my-cluster
        Environment=ECS_LOGLEVEL=warn
        Environment=ECS_CHECKPOINT=true
        ExecStartPre=-/usr/bin/docker kill ecs-agent
        ExecStartPre=-/usr/bin/docker rm ecs-agent
        ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent
        ExecStart=/usr/bin/docker run --name ecs-agent --env=ECS_CLUSTER=${ECS_CLUSTER} --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} --env=ECS_CHECKPOINT=${ECS_CHECKPOINT} --publish=127.0.0.1:51678:51678 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/lib/aws/ecs:/data amazon/amazon-ecs-agent
        ExecStop=/usr/bin/docker stop ecs-agent

Note the Environment=ECS_CLUSTER=my-cluster line; this is the most important bit for getting the server to check in to your cluster, assuming you named it “my-cluster”.  Feel free to add any other values your infrastructure may need.  Once you have the config how you want it, run it through the CoreOS cloud-config validator to make sure it checks out.  If everything looks okay there, your cloud-config should be ready to go.
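If you prefer to validate locally instead of pasting the config into the hosted validator, the coreos-cloudinit binary that ships with CoreOS can do the same check.  The flags below are my recollection of that tool, so double check against the CoreOS docs if it complains:

# validate the cloud-config on an existing CoreOS host
coreos-cloudinit -validate -from-file=./ecs.yml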

You can find more info about how to configure the ECS agent in the docs here.

Once you have your cloud-config in order, you will need to get your Terraform “recipe” set up.  I used this awesome github project as the base for my own project.  The Terraform logic from there basically creates an AWS launch configuration and autoscaling group (using the cloud-config from above) to launch instances into your cluster.  The ECS agent takes care of the rest once your servers are up and the agent is reporting in to the cluster.

launch_config.tf

resource "aws_launch_configuration" "ecs" {
  name = "ECS ${var.cluster_name}"
  image_id = "${var.ami}"
  instance_type = "${var.instance_type}"
  iam_instance_profile = "${var.iam_instance_profile}"
  key_name = "${var.key_name}"
  security_groups = ["${split(",", var.security_group_ids)}"]
  user_data = "${file("../cloud-config/ecs.yml")}"

  root_block_device = {
    volume_type = "gp2"
    volume_size = "40"
  }
}

Notice the user_data section.  This is where we inject the cloud config from above to provision CoreOS and launch the ECS agent.

autoscaler.tf

resource "aws_autoscaling_group" "ecs-cluster" {
  availability_zones = ["${split(",", var.availability_zones)}"]
  vpc_zone_identifier = ["${split(",", var.subnet_ids)}"]
  name = "ECS ${var.cluster_name}"
  min_size = "${var.min_size}"
  max_size = "${var.max_size}"
  desired_capacity = "${var.desired_capacity}"
  health_check_type = "EC2"
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  health_check_grace_period = "${var.health_check_grace_period}"

  tag {
    key = "Env"
    value = "${var.environment_name}"
    propagate_at_launch = true
  }

  tag {
    key = "Name"
    value = "ECS ${var.cluster_name}"
    propagate_at_launch = true
  }
}

There are a few caveats I’d like to highlight with this approach.  First, I already had an AWS infrastructure in place that I was testing this against, so I didn’t have to do any of the extra work to create a VPC or a gateway for the VPC.  I didn’t have to create the security groups and subnets either; I just added them to the Terraform code.

The other caveat is that if you want to use the Github project I linked to, you will need to make sure that you populate the variables with your own environment-specific values.  That is why already having the VPC, subnets and security groups was handy for me.  Be sure to browse through the variables.tf file and substitute in your own values.  As an example, I had to update the variables to use the CoreOS 766.4.0 image.  This AMI is specific to your AWS region so make sure to look up the AMI first.

variable "ami" {
  /* CoreOS 766.4.0 */
  default = "ami-dbe71d9f"
  description = "AMI id to launch, must be in the region specified by the region variable"
}
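If you need to look up the CoreOS AMI for your region, the AWS CLI can do it.  This assumes the images follow the usual CoreOS-stable-<version> naming convention, so verify the result against the official CoreOS AMI listing:

aws ec2 describe-images --region us-west-2 \
  --filters "Name=name,Values=CoreOS-stable-766.4.0*" \
  --query 'Images[*].[ImageId,Name]' --output text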

Another part I had to modify to get the Github project to work was adding in my AWS credentials which look similar to the following.  Make sure to update these variables with your ID and secret.

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

variable "access_key" {
  description = "AWS access key"
  default = "XXX"
}

variable "secret_key" {
  description = "AWS secret access key"
  default = "xxx"
}

Make sure to also copy/edit the autoscaler.tf and launch_config.tf files to reflect anything that is specific to your environment (Terraform will complain if there are issues).

After you have combed through the variables.tf and updated the Terraform files to your liking you can simply run terraform plan -input=false and see how Terraform will create the ASG for you.

If everything looks good, you can run terraform apply -input=false and Terraform will go out and start building your new ECS infrastructure for you.  After a few minutes, check the EC2 console and your launch configuration and autoscaling group should be in there.  If all of that looks okay, check the ECS console and your new servers should show up and be ready to go to work for you!
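If you want to double check that the agents are actually registering, the ECS agent exposes a small introspection API on each host (this is the 127.0.0.1:51678 port published in the cloud-config above), and the AWS CLI can list the container instances that have joined the cluster:

# on one of the CoreOS hosts
curl -s http://localhost:51678/v1/metadata

# or from anywhere the AWS CLI is configured
aws ecs list-container-instances --cluster my-cluster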

NOTE: If you are starting from scratch, it is possible to do all of the infrastructure provisioning via Terraform but it is too far out of the scope of this post to cover because there are a lot of steps to it.


DevOps Conferences

I did a post quite a while ago that highlighted some of the cooler system admin and operations oriented conferences that were on my radar at that time.  Since then I have changed jobs and am now in a DevOps oriented position, so I’d like to revisit the subject and update that list to reflect some of the cool conferences in the DevOps space.

I’d like to start off by saying that even if you can’t make it to the bigger conferences, local groups and meetups are an excellent way to get out and meet other professionals who do what you do.  Local groups are also a great way to stay in the loop on what’s current and to learn about what others are doing.  If you are interested in eventually becoming a presenter or speaker, local meetups and groups can be a great way to get started.  There are numerous opportunities and communities (especially in bigger cities), so check here for information or to see if there is a DevOps meetup near you.  If there is nothing nearby, start one!  If you can’t find any DevOps groups, look for Linux groups or developer groups and network from there; DevOps is beginning to become popular in broader circles.

After you get your feet wet with meetups, the next place to start looking is conferences that sound interesting to you.  There are about a million different opportunities to choose from: security conferences, developer conferences, server and network conferences, all the way down the line.  I am sticking with strictly DevOps related conferences because that is currently what I am interested in and know best.

Feel free to comment if I missed any conferences that you think should be on this list.

DevOps Days (Multiple dates)

Perhaps the most DevOps centric conference on this list.  These conferences are a great way to meet fellow DevOps professionals and network with them.  The space and industry are changing constantly, and staying on top of all of the changes is crucial to being successful.  Another nice thing about DevOps Days is that they are spread out around the country (and world) and throughout the year, so they are very accessible.  WARNING:  DevOps Days are not tied to any one set of DevOps tools but rather to the principles and techniques and how to apply them to different environments.  If you are looking for super in-depth technical talks, this one may not be for you.

ChefConf (March)

The main Chef conference.  There are large conferences for each of the main configuration management tools, but I chose to highlight Chef because that’s what we use at my job.  There are lots of good talks with a Chef centered theme that are also great because the practices can be applied with other tools.  For example, there are many DevOps themes at ChefConf, including continuous integration and deployment topics, how to scale environments, tying different tools together and general configuration management techniques.  Highly recommended for Chef users; feel free to substitute the other big configuration management tool conferences here if Chef isn’t your cup of tea (Salt, Puppet, Ansible).

CoreOS Fest (May)

  • 2015 videos haven’t been posted yet

Admittedly, this is a much smaller and more niche conference, but it is still awesome.  It is the first conference put on by the folks at CoreOS and was designed to help the community keep up with what is going on in the CoreOS and container world.  The venue is pretty small but the content at this year’s conference was very good.  There were some epic announcements and talks this year, including Tectonic announcements and Kubernetes deep dives, so if container technology is something you’re interested in then this conference is definitely worth checking out.

Velocity (May)

This one just popped up on my DevOps conference radar.  I have been hearing good things about this conference for a while now but have not had the opportunity to go.  It always has interesting speakers and topics, and a number of the DevOps thought leaders show up for this event.  One cool thing about this conference is that there are a variety of different topics at any one time, so it offers a nice, wide spectrum of information.  For example, there are technical tracks covering different areas of DevOps.

DockerCon (June)

Docker has been growing at a crazy pace, so this seems like the big conference to check out if you are in the container space.  It is similar to CoreOS Fest but focuses more heavily on Docker topics (obviously).  I haven’t had a chance to go to one of these yet, but containers and Docker have so much momentum that they are very difficult to avoid.  Many people also believe that container technologies are going to be the path to the future, so it is a good idea to be as close to the action as you can.

Monitorama (June)

I think this is one of the coolest conferences, but that is probably just because I am so obsessed with monitoring and metrics collection.  Monitoring seems to be one of those topics that isn’t always fun to deal with, but the talks and technologies at this conference actually make me excited about monitoring.  To most, monitoring is a necessary evil, and a lot of the content from this conference can help make your life easier and better in all aspects of monitoring, from new trends and tools to topics on how to correctly monitor and scale infrastructures.  Talks can be technical but are well worth it if monitoring is something that interests you.

AWS Re:Invent (November)

This one is a monster.  It is the big conference that AWS puts on every year to announce new products and technologies that they have been working on, as well as provide some incredibly helpful technical talks.  I believe this is one of the pricier and more exclusive conferences, but it offers a lot in the way of content and details.  It offers some of the best, most technical topics of discussion that I have seen and has been invaluable as a learning resource.  All of the videos from the conference are posted on YouTube so you can get access to this information for free.  Obviously the content is related to AWS, but I have found it to be a great way to learn.

Conclusion

Even if you don’t have a lot of time to travel or get out to these conferences, nearly all of them post videos from the event so you can watch them whenever you want.  This is an INCREDIBLE learning tool and resource that is FREE.  The only downside to the videos is that you can’t ask any questions, but it is easy to find the presenters’ contact info if you are interested and feel like reaching out.

That being said, you tend to get a lot more out of attending a conference in person.  The main benefit of going over watching the videos alone is that you get to meet and talk to others in the space, get a feel for what everybody else is doing, and check out many cool tools that you might otherwise never hear about.  At every conference I attend, I always learn about some new tech that others are using that I have never heard of and is incredibly useful, and I always run into interesting people that I would otherwise not have the opportunity to meet.

So if you can, definitely get out to these conferences, meet and talk to people, and get as much out of them as you can.  If you can’t make it, check out the videos afterwards for some really great nuggets of information; they are a great way to keep your skills sharp and current.

If you have any more conferences to add to this list I would be happy to update it!  I am always looking for new conferences and DevOps related events.


Composing a Graphite server with Docker

Recently our Graphite server needed to be overhauled, which I was not looking forward to.  Luckily, Docker makes the process of building identical and reproducible images for configuring a new server much easier and less painful than other methods.

Introduction

If you don’t know what Graphite is, you can check out the documentation for more info.  Basically, it is a tool to collect and aggregate metrics of pretty much any kind into a central location.  It is a great complement to something like statsd for metric collection and aggregation, which I will go over later.

The setup I will be describing today leverages a handful of components to work.  The first and most important part is Graphite.  This includes all of the pieces that make up Graphite: the carbon aggregator and carbon cache for the collection and processing of metrics, as well as the whisper database for storing metrics.

There are several alternative backends, but I don’t have any experience with them, so I won’t be posting any details.  If you are interested, InfluxDB and OpenTSDB both look like interesting alternatives to whisper for storing metrics.

The Problem

Graphite is notoriously difficult to install and configure properly.  If you haven’t tried to set it up before, give it a try.

Another argument I hear quite a bit is that the Graphite workload doesn’t really fit the Docker model.  In a distributed or highly available architecture that might be the case, but in the example I cover here we are taking a different approach.

The design separates the data onto an EBS volume, which is durable storage, so it doesn’t matter if the server itself runs into problems.  With this approach we can reprovision the server and have everything up and running again in less than 5 minutes.

The benefit of doing it this way is obvious.  Another benefit of our approach is that we are leveraging the graphite-api package, so we have access to all of the Graphite goodness without having to run all of the other bloat, and we proxy it through nginx/uwsgi, which helps with performance.  I will go over this setup in a little bit.  No Graphite server would be complete if it didn’t leverage Grafana, which turns out to be stupidly easy to add with the Docker approach.

If we were ever to try to expand this architecture I think a distributed model using EFS (currently in preview) along with some type of load balancer in front to distribute requests evenly may be a possibility.  If you have experience running Graphite across many nodes I would love to hear what you are doing.

The Solution

There are a few components to our architecture.  The first is a tool I have been writing about recently called Terraform.  We use this with some custom scripting to build the server, configure it and attach our Graphite data volume to the server.

Here is what a sample Terraform config might look like to provision the server with the tools we want.  This server is provisioned into an AWS environment and leverages a number of variables.  You can check the docs on how variables work, or if there is too much confusion I can post an example.

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

resource "aws_instance" "graphite" {
  ami = "${lookup(var.amis, var.region)}"
  availability_zone = "us-east-1e"
  instance_type = "c3.xlarge"
  subnet_id = "${var.public-1e}"
  security_groups = ["${var.graphite}"]
  key_name = "XXX"
  user_data = "${file("../cloud-config/graphite.yml")}"

  root_block_device = {
    volume_type = "gp2"
    volume_size = "20"
  }

  connection {
    user = "username"
    key_file = "${var.key_path}"
  }

  # mount EBS
  provisioner "local-exec" {
    command = "aws ec2 attach-volume --region=us-east-1 --volume-id=${var.graphite_data_vol} --instance-id=${aws_instance.graphite.id} --device=/dev/xvdf"
  }

  provisioner "remote-exec" {
    inline = [
      "while [ ! -e /dev/xvdf ]; do sleep 1; done",
      "echo '/dev/xvdf /data ext4 defaults 0 0' | sudo tee -a /etc/fstab",
      "sudo mkdir /data && sudo mount -t ext4 /dev/xvdf /data"
    ]
  }
}

And optionally, if you have an Elastic IP to use, you can tack that onto your config.

resource "aws_eip" "graphite" {
  instance = "${aws_instance.graphite.id}"
  vpc = true
}

The Graphite server uses a mostly standard cloud-config and installs a few of the components that we need to run the server: docker, python, pip, docker-compose, etc.  Here is what a sample cloud-config for the Graphite server might look like.

#cloud-config

# Make sure OS is up to date
apt_update: true
apt_upgrade: true
disable_root: true

# Connect to private repo
write_files:
 - path: /home/<user>/.dockercfg
   owner: user:group
   permissions: 0755
   content: |
     {
       "https://index.docker.io/v1/": {
         "auth": "XXX",
         "email": "email"
       }
     }

# Capture all subprocess output for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

packages:
 - python-dev
 - python-pip

# Install latest Docker version
runcmd:
 - apt-get -y install linux-image-extra-$(uname -r)
 - curl -sSL https://get.docker.com/ubuntu/ | sudo sh
 - usermod -a -G docker <user>
 - sg docker
 - sudo pip install -U docker-compose

# Reboot for changes to take
power_state:
 mode: reboot
 delay: "+1"

ssh_authorized_keys:
 - <put your ssh public key here>

Docker

This is where most of the magic happens.  As noted above, we are using Docker and a few of its tools to get everything working.  All the logic to get Graphite running is contained in the Dockerfile, which will require some customizing but is similar to the following.

# Building from Ubuntu base
FROM ubuntu:14.04.2

# This suppresses a bunch of annoying warnings from debconf
ENV DEBIAN_FRONTEND noninteractive

# Install all system dependencies
RUN \
 apt-get -qq install -y software-properties-common && \
 add-apt-repository -y ppa:chris-lea/node.js && \
 apt-get -qq update -y && \
 apt-get -qq install -y build-essential curl \
 # Graphite dependencies
 python-dev libcairo2-dev libffi-dev python-pip \
 # Supervisor
 supervisor \
 # nginx + uWSGI
 nginx uwsgi-plugin-python \
 # StatsD
 nodejs

# Install StatsD
RUN \
 mkdir -p /opt && \
 cd /opt && \
 curl -sLo statsd.tar.gz https://github.com/etsy/statsd/archive/v0.7.2.tar.gz && \
 tar -xzf statsd.tar.gz && \
 mv statsd-0.7.2 statsd

# Install Python packages for Graphite
RUN pip install graphite-api[sentry] whisper carbon

# Optional install graphite-api caching
# http://graphite-api.readthedocs.org/en/latest/installation.html#extra-dependencies
# RUN pip install -y graphite-api[cache]

# Configuration
# Graphite configs
ADD carbon.conf /opt/graphite/conf/carbon.conf
ADD storage-schemas.conf /opt/graphite/conf/storage-schemas.conf
ADD storage-aggregation.conf /opt/graphite/conf/storage-aggregation.conf
# Supervisord
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# StatsD
ADD statsd_config.js /etc/statsd/config.js
# Graphite API
ADD graphite-api.yaml /etc/graphite-api.yaml
# uwsgi
ADD uwsgi.conf /etc/uwsgi.conf
# nginx
ADD nginx.conf /etc/nginx/nginx.conf
ADD basic_auth /etc/nginx/basic_auth

# nginx
EXPOSE 80 \
# graphite-api
8080 \
# Carbon line receiver
2003 \
# Carbon pickle receiver
2004 \
# Carbon cache query
7002 \
# StatsD UDP
8125 \
# StatsD Admin
8126

# Launch stack
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

The other component we need is Grafana, which we don’t actually build but instead pull from the Dockerhub registry and inject our custom volume into.  This is all captured in the docker-compose.yml file listed below.

graphite:
  build: ./docker-graphite
  restart: always
  ports:
    - "8080:80"
    - "8125:8125/udp"
    - "8126:8126"
    - "2003:2003"
    - "2004:2004"
  volumes:
    - "/data/graphite:/opt/graphite/storage/whisper"

grafana:
  image: grafana/grafana
  restart: always
  ports:
    - "80:3000"
  volumes:
    - "/data/grafana:/var/lib/grafana"
  links:
    - graphite
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=password123

We have open sourced our configuration and placed it on github, so you can take a look at it to get a better idea of the configs and how everything works together, with some working examples.  The github repo is a quick way to try out the stack without having to provision and build an environment to run it on.  If you are just interested in kicking the tires, I suggest starting with the github repo.

The build directive above corresponds to the repo on github.
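If you just want to kick the tires, something along these lines should bring the whole stack up, assuming Docker and docker-compose are installed and the repo layout matches the compose file above (the clone URL is a placeholder for the actual github repo):

git clone <repo-url> graphite-stack
cd graphite-stack
docker-compose up -d
docker-compose ps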

The last component is actually running the Docker containers.  As you can see we use docker-compose, but we also need a way to start the containers automatically after a disruption like a reboot.  That is actually pretty easy.  On Ubuntu (or any system using upstart) you can create an init script to start up docker-compose and restart it automatically if it has problems.  Here I have created a file called /etc/init/graphite.conf with the following configuration.

description "Graphite"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
chdir /home/user
exec docker-compose up

A systemd service would achieve a similar goal but the version of Ubuntu used here doesn’t leverage systemd.
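For reference, on a distribution that does use systemd, a roughly equivalent unit might look like the sketch below.  The working directory and the /usr/local/bin/docker-compose path are assumptions based on the upstart config and the pip install above, so adjust them for your host:

# write a minimal unit that keeps docker-compose running
sudo tee /etc/systemd/system/graphite.service > /dev/null <<'EOF'
[Unit]
Description=Graphite
Requires=docker.service
After=docker.service

[Service]
Restart=always
WorkingDirectory=/home/user
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose stop

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable graphite.service
sudo systemctl start graphite.service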

After everything has been dropped in place and configured, you can check your work by hitting the public IP address of your server.  If you hit the Grafana splash page, everything should be working!
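Beyond loading the Grafana UI, you can also push a quick test metric at the carbon line receiver (port 2003 is mapped through in the docker-compose file above) and then look for it in Grafana.  The metric name is arbitrary and, depending on your netcat flavor, you may need a -q 0 flag so it closes the connection:

# send a single test metric to carbon's plaintext listener
echo "test.deploy.check 1 $(date +%s)" | nc <server-ip> 2003

# Grafana should answer on port 80
curl -I http://<server-ip>/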

Grafana dashboard

Conclusion

There are many pieces to this puzzle, and honestly we don’t have a requirement for Graphite to be 100% available and redundant, so we can get away with a single server for our needs.  A separate EBS volume and Terraform allow us to rebuild the server quickly and automatically if something were to happen to it.  Also, the way we have designed Graphite to run, it will be able to handle a substantial workload without falling over.  But if you are doing anything cool with Graphite HA or resiliency I would love to hear how you are doing it; there is always room for improvement.

If you are just interested in trying out the Graphite stack, I highly suggest going over to the github repo and running the container stack to play around with the components, especially if you are interested in learning about how statsd and graphite collect metrics.  The Grafana interface gives you a nice way to tap into the metrics that get pumped into Graphite.


Shipping logs to ELK

Following along in the progression of this little mini series about getting the ELK stack working on Docker, we are almost finished.  The last step after getting the ELK stack up and running (part 1) and optimizing LS and ES (part 2) is to get the logs flowing into the ELK server.

There are a few options (actually there are a lot) for getting your logs into Logstash and Elasticsearch.  I will be focusing on the two log shippers I found to be the most powerful and flexible for this task.  There are a variety of other options for jamming logs into LS, but for my purposes they either didn’t fit in with my workflow or just weren’t supported well enough.

For more info you can check the various inputs available here.

One notable option that isn’t covered here is the Logstash agent, which requires installing the entire LS project even though you only use the logging agent component.  It is a heavyweight solution but is good for testing locally.

There is also the beaver project for logging over a TCP socket, which is nice if you are either logging internally only or using a broker like Redis or Kafka.  It is obviously not a great option if security of log transmission is important to you, so it would not be a good choice if you are collecting logs over a public internet connection.

logstash-forwarder

The first log shipper I started with, creatively entitled “logstash-forwarder”, was created by the author of Logstash and is written in Go, so it is super fast and has a very small footprint.  Another benefit of this logging method is that connections to the LS server are wrapped in TLS, so the logging agent solves the problem that straight TCP collectors have by securing the data in transit.

There are great instructions for getting up and going on the project github page; there are even instructions for creating a Debian/RPM package out of the Go binary for an easy way to distribute the shipper.  If you plan on shipping the logs via a Docker container, I would suggest looking through the docs on the github page for how to build the Debian package.

The recently released version 0.4.0 was an attractive option because it added the ability to tail logs, so that LSF won’t try to forward an entire log file if the “pipe” to the LS server gets broken or the agent somehow dies and needs to be restarted.  Prior to the 0.4.0 release these issues could potentially bog down or crash the LS server, record logs out of order or create duplicates.

To run logstash-forwarder with the appropriate tailing flag turned on, use this command.

/opt/logstash-forwarder/bin/logstash-forwarder -tail -config /etc/logstash-forwarder

A couple of things to note.  The /opt/logstash-forwarder/bin/logstash-forwarder part is the path the binary was installed to.  The -tail flag tells LSF to tail the log.  The -config flag specifies where the LSF client should look for a configuration to load.

The configuration can be as simple (or complicated) as you want.  It basically just needs a cert to communicate with the Logstash server.

{
 "network": {
   "servers": [ "<server>:<port>" ],
   "ssl certificate": "/opt/certs/logstash.crt",
   "ssl key": "/opt/certs/logstash.key",
   "ssl ca": "/opt/certs/logstash.crt",
   "timeout": 15
 },

 "files": [
 {
   "paths": [ "/var/log/*.log" ],
   "fields": { "type": "syslog" }
 }
 ]
}
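If you don’t already have certificates, a self-signed pair like the one referenced above can be generated with openssl.  Just make sure the CN (or an IP subjectAltName) matches the address your clients use to reach the Logstash server, otherwise the TLS handshake will fail; the filenames here simply mirror the config above:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=<server>" \
  -keyout /opt/certs/logstash.key -out /opt/certs/logstash.crt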

By default, the LSF client can be somewhat noisy in its stdout logging (especially for a Docker container) so we can turn down the info logging so that only errors and alerts are logged.

/opt/logstash-forwarder/bin/logstash-forwarder -quiet -tail -config /etc/logstash-forwarder

There are more options of course if you are interested and you can list them out by running the binary with no additional options passed in.  But for my use case, quiet and tail were all I needed.

Since the theme of this mini series is how to get everything running in Docker, I will show you what a logstash-forwarder Docker image looks like here.  The Dockerfile for creating the logstash-forwarder image is pretty straightforward.  I have chosen to install a few extra tools into the container that help with troubleshooting should there ever be an issue with the client running inside the container.

We also inject the deb package into the container, as well as the certs.

FROM debian:wheezy

ENV DEBIAN_FRONTEND noninteractive

# Install
RUN apt-get update && apt-get install -y -qq vim curl netcat
ADD logstash-forwarder_0.4.0_amd64.deb /tmp/
RUN dpkg -i /tmp/logstash-forwarder_0.4.0_amd64.deb

# Config
RUN mkdir -p /opt/certs/
ADD local.conf /etc/logstash-forwarder
ADD logstash-forwarder.crt /opt/certs/logstash-forwarder.crt
ADD logstash-forwarder.key /opt/certs/logstash-forwarder.key

# start lsf
CMD ["/opt/logstash-forwarder/bin/logstash-forwarder", "-quiet", "-tail", "-config", "/etc/logstash-forwarder"]

I believe there are future plans to create a logger similar to LSF but written in JRuby, so it is easier to maintain and fits more with the style of the LS project.

The last piece to get this working is the docker run command.  It will depend on your own environment, but a generic run command might look like the following.  Obviously replace “<myserver>” and “<org/image:tag>” with your specific information.

docker run -v /data:/data --name lsf --hostname <myserver> <org/image:tag>

Log Courier

I was having issues getting logstash-forwarder to work correctly at one point, so I began to explore different options for loggers and stumbled across this awesome project.  Log Courier is like logstash-forwarder on steroids.  It is much more customizable and offers a large number of options that aren’t available in logstash-forwarder, such as the ability to do log processing at the client end, which is a major bonus over other log shippers.

The project (and its documentation) live in this github project.  The docs are very good and the maintainer is very responsive to issues and questions, so I recommend checking out the project as a reference.  Log Courier is similar to LSF in that you need to build it and create a package for it, so as a prerequisite you will need to have Go installed.

Again, all of this information is on the github project, which does a much better job of explaining how to get this all working.  To help alleviate some of the build issues that turn people away from this project, I believe there are now discussions about creating publicly available Debian and RPM packages.

Once you have your package created and installed you can run LC as follows:

/opt/log-courier/bin/log-courier -config /etc/courier.conf

The only flag we need to pass is the -config flag.  There are a few other command line flags available, but almost all of the configuration for LC is done via the config file that gets passed to the client when it starts, including logging levels and other customizations.  It isn’t really mentioned here, but the default behavior for LC is to tail the logs, so you don’t need to worry about crashing your LS server if the stream ever breaks.  LC is good at figuring out what it should do and picking up where it left off.

You can check the docs for all of the custom configurations you can pass to LC here.

Let’s take a look at what a sample configuration file might look like for LC to demonstrate some of its enhanced features.

{
 "network": {
   "servers": [ "<server>:<port>" ],
   "ssl ca": "/opt/certs/courier.crt",
   "timeout": 15
 },

 "general": {
   "log level": "debug"
 },

 "files": [
 {
   "paths": [ "/data/*foo.log" ],
   "fields": { "type": "foo" }
 },
 {
   "paths": [ "/data/*bar.log" ],
   "fields": { "type": "bar" },
   "codec": {
     "name": "multiline",
     "pattern": "^%{TIMESTAMP_ISO8601} ",
     "negate": true,
     "what": "previous"
   }
 }
 ]
}

The network section is similar to LSF; you need to point the client at the correct server and you also need to tell it which cert to connect with.  Generating the cert is basically the same as it was for LSF, just use a different name.  The “general” section provides a place to set options at the global level for LC.  This configuration also uses wildcard expansion to match log paths, the same way LSF does.  The most interesting part is that in this configuration we can do multiline processing at the client level, which LSF does not support.  This takes some strain off of the server and is a great reason to use LC.

And because this is another Docker example, here is the Dockerfile.  This is very similar to the LSF Dockerfile; we are just using a different .deb file (which we created above), different certs and a different CMD to start the logger.

#FROM ubuntu:14.04
FROM debian:wheezy

ENV DEBIAN_FRONTEND noninteractive

# Install
RUN apt-get update && apt-get install -y -qq vim curl netcat
ADD log-courier_1.6_amd64.deb /tmp/
RUN dpkg -i /tmp/log-courier_1.6_amd64.deb

# Config
RUN mkdir -p /opt/certs/
ADD local.conf /etc/courier.conf
ADD courier.crt /opt/certs/courier.crt
ADD courier.key /opt/certs/courier.key

# start log courier
CMD ["/opt/log-courier/bin/log-courier", "-config", "/etc/courier.conf"]

As mentioned, I have already built the Debian package, so I simply inject it into my Docker image.  Running the Docker image is similar to LSF.

docker run -v /data:/data --name courier --hostname <myserver> <org/image:tag>

Conclusion

Some of the configurations I am using are specific to my workflow and environment, but most of this can be adapted.  Running the LSF or LC clients in containers is a great way to isolate your logging client.  The reason this works so well in my scenario is that we use the /data volume as a pattern on all of our host machines for application specific logs.  That makes it very easy to point the LSF and LC clients at the right location.  If you aren’t using any custom directories (or are using lots of them), you can just update the volume mounts in your docker run command to look in the locations where you expect logs.

Once you have the logging workflow mastered, you can start writing unit files to run these containers via systemd or fleet, or injecting them into cloud-configs, which makes scaling these logging containers simple.  Our environment leverages CoreOS, so we write unit files for the loggers in our cloud-configs, which takes care of scaling this workflow.  If you aren’t using CoreOS or systemd this could probably be made to work with docker-compose, but I haven’t tried it yet.

If you don’t use Docker, then you can easily strip out the LSF and LC specific parts to get this working.  The main issue to work through will be creating the package for distribution and installation.  Once you have the packages you should be good to go; all of the commands and configuration being run by Docker should work the same.

Feel free to comment or let me know if you have questions.  There are a lot of moving pieces to this workflow but it becomes pretty powerful once all of the components are set up and put in place.


Performance tuning ELK stack

Building on my previous post describing how to quickly set up a centralized logging solution using the Elasticsearch + Logstash + Kibana (ELK) stack, we now have a fully functioning, Docker based ELK stack aggregating and handling our logs.  The only problem is that performance is either way too slow or the stack seems to crash.  As I worked through this problem myself, I found a few settings that vastly improved the stability and performance of my ELK stack.

So in this post I will share some of the useful tweaks and changes that worked in my own environment to help squeeze additional performance out of the setup.

Hopefully these adjustments will help others!

Adjusting Logstash

Out of the box, Logstash does a pretty good job of setting things up with reasonable default values.  The main adjustment that I have found to be useful is setting the number of Logstash “workers” when the Logstash process starts.  A good rule of thumb is one worker per CPU.  So if the server has 4 CPUs, your Logstash start up command would look similar to the following.

/opt/logstash/bin/logstash --verbose -w 4 -f /etc/logstash/server.conf

The important bit to note is the “-w 4” part.  A poorly configured server.conf file may also lead to performance issues, but that will be very specific to your setup.  For the most part, unless the configuration contains many conditionals, expensive codec calls or excessive filtering, performance here should be stable.

If you are concerned about utilization, I recommend watching CPU and memory consumption by the Logstash process; a sign that there could be a configuration issue is the CPU maxing out.

The main thing to be aware of with a custom number of workers is that some codecs may not work properly because they are not thread safe.  If you are using the “multiline” codec in any of your inputs then you will not be able to leverage multiple workers; or, if you are using multiple workers, you won’t be able to use the codec until the thread safety problems have been fixed.  The good news is that this is a known issue and is being worked on, and hopefully it will be fixed by the time 1.5.0 is released.  It tripped me up initially, so I thought I would mention the issue.

Increase Java heap memory

It turns out that Elasticsearch is a bit of a memory hog once you start actually sending data through Logstash for ES to consume.  In my own testing, I discovered that logs would mysteriously stop being recorded in ES and consequently would fail to show up in my dashboards.

The first tweak to make is to increase the amount of memory available to Java to process ES indices.  I discovered that there is a script that ES uses to load up Java when it starts, which passes in a value of 1GB of RAM by default.

After some digging, I found that the default ES configuration I was using was quickly running out of memory and crashing because the ES heap couldn’t keep up with the load (mostly indices).

Here is a sample of the errors I was seeing when ES and Logstash stopped processing logs.

message [out of memory][OutOfMemoryError[Java heap space]]

This was a good starting point for investigating.  Basically, what this means is that the ES container had a Java heap memory setting of 1GB, which was exhausting the memory allocated to ES, even though there was much more memory available on the server.

To increase this memory limit, we will override this script with our own custom values.

This script is called “elasticsearch.in.sh” and we will be overriding the default value of ES_MAX_MEM with a value of “4g” as shown below.  The general guideline is to use a value of about 50% of the total amount of memory, so if your server has 8GB of memory then setting it to 4GB here gives us the 50% we are looking for.

There are many other options that can be overridden, but the most important value is the max memory value that we have updated.

We can inject this custom value as an environment variable in our Dockerfile which makes managing custom configurations much easier if we need to make additions or adjustments later on.

ENV ES_HEAP_SIZE=4g

I am posting the script that sets these values below as a reference in case there are other values you need to override.  Again, we can use environment variables to set these up in our Dockerfile if needed.

#!/bin/sh

ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/elasticsearch-1.5.0.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*

if [ "x$ES_MIN_MEM" = "x" ]; then
 ES_MIN_MEM=256m
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
 ES_MAX_MEM=1g
fi
if [ "x$ES_HEAP_SIZE" != "x" ]; then
 ES_MIN_MEM=$ES_HEAP_SIZE
 ES_MAX_MEM=$ES_HEAP_SIZE
fi

# min and max heap sizes should be set to the same value to avoid
# stop-the-world GC pauses during resize, and so that we can lock the
# heap in memory on startup to prevent any of it from being swapped
# out.
JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM}"
JAVA_OPTS="$JAVA_OPTS -Xmx${ES_MAX_MEM}"

# new generation
if [ "x$ES_HEAP_NEWSIZE" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -Xmn${ES_HEAP_NEWSIZE}"
fi

# max direct memory
if [ "x$ES_DIRECT_SIZE" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=${ES_DIRECT_SIZE}"
fi

# set to headless, just in case
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"

# Force the JVM to use IPv4 stack
if [ "x$ES_USE_IPV4" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
fi

JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"

JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

# GC logging options
if [ "x$ES_USE_GC_LOGGING" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCTimeStamps"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintClassHistogram"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintTenuringDistribution"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime"
 JAVA_OPTS="$JAVA_OPTS -Xloggc:/var/log/elasticsearch/gc.log"
fi

# Causes the JVM to dump its heap on OutOfMemory.
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
# The path to the heap dump location, note directory must exists and have enough
# space for a full heap dump.
#JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof"

# Disables explicit GC
JAVA_OPTS="$JAVA_OPTS -XX:+DisableExplicitGC"

# Ensure UTF-8 encoding by default (e.g. filenames)
JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"

Then when Elasticsearch starts up it will look for this custom configuration script and start Java with the desired 4GB of memory.  This is one easy way to squeeze performance out of your server without making any other changes or modifying your server.

Modify Elasticsearch configuration

This one is also very easy to put in place.  We are already using elasticsearch.yml, so the only thing that needs to be done is to add some additional configuration to this file, rebuild the container, and restart the ES container with the updated values.

A good setting to configure to help control ES memory usage is the indices field data cache size.  Limiting this cache size makes sense because you rarely need to retrieve logs that are older than a few days.  By default ES will hold old indices in memory and never let them go, so unless you have unlimited memory it makes sense to limit the cache in this scenario.

To limit the cache size, simply add the following value anywhere in your custom elasticsearch.yml configuration file.  This setting, along with adjusting the Java heap size, should be enough to get started, but there are a few other things that might be worth checking.

indices.fielddata.cache.size:  40%

If you only make one change, add this line to your ES configuration!  This setting will let go of the oldest indices first, so you won’t be dropping new indices; 9 times out of 10 this is probably what you want when accessing data in Logstash.  More information about controlling memory usage can be found here.

Another easy performance boost worth looking at is disabling swap if it has been enabled.  Again, in most cloud environments and images swap is turned off, but it is always a setting worth checking.

To bypass the OS swap setting you can simply configure a no-swap value in ES by adding the following to your elasticsearch.yml configuration file.

bootstrap.mlockall: true

To check that this value has been configured properly you can run this command.

curl http://localhost:9200/_nodes/process?pretty
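The output is fairly verbose, so it helps to filter for the relevant field.  With the bootstrap setting applied you should see mlockall reported as true for the node (the grep is just for convenience):

curl -s http://localhost:9200/_nodes/process?pretty | grep mlockall
# "mlockall" : true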

This may cause memory warnings when ES starts up (e.g., Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).), but you should be able to ignore these warnings.  If you are concerned, raise the limits at the OS level, which is demonstrated below.

Misc

Other low hanging fruit includes raising the open file limits on the OS.  ES can run into problems if there is a cap on the number of files that its processes can have open at a time.  I have run into open file limit issues before and they are never fun to deal with.  This shouldn’t be an issue if you are running ES in a Docker container with the Ubuntu 14.04 base image.

If you aren’t sure about the open file limits for ES you can run the following command to get a better idea of the current limits.

ulimit -n

Make sure both the soft and hard limits are either set to unlimited or to an extremely high number.
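If you do need to raise either limit on the host itself, the usual place is /etc/security/limits.conf.  The elasticsearch user name below is an assumption, so substitute whatever user actually runs the ES process, and note that the new limits only apply to fresh sessions or a restarted service:

# allow the ES user to lock memory and open plenty of files
cat <<'EOF' | sudo tee -a /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
EOF

# verify from a fresh shell for that user
ulimit -l
ulimit -n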

These changes should take care of most, if not all, of the stability issues.  After putting them in place in my own environment I went from multiple crashes per day to none so far in over a week.  If you are still experiencing issues, you might want to take a look at the ES production deployment guide for help, or the #logstash and #elasticsearch IRC channels on freenode.
