
Shipping logs to ELK

Following along in the progression of this little mini series about getting the ELK stack working on Docker, we are almost finished.  The last step, after getting the ELK stack up and running (part 1) and optimizing LS and ES (part 2), is to get the logs flowing into the ELK server.

There are a few options (actually there are a lot) for getting your logs into Logstash and Elasticsearch.  I will be focusing on the two log shippers I found to be the most powerful and flexible for this task.  There are a variety of other options for jamming logs into LS, but for my purposes they either don’t fit in with my workflow or just aren’t supported well enough.

For more info you can check out the various other inputs here.

Another notable option not covered here is the Logstash agent, which requires installing the entire LS project even though you only use the logging agent component.  It is a heavyweight solution, but it is good for testing locally.

There is also the beaver project for shipping logs over a plain TCP socket, which is nice if you are only logging internally or are using a broker like Redis or Kafka.  It is obviously not a great option if security of log transmission is important to you, so it would not be a good fit if you are collecting logs over a public internet connection.

logstash-forwarder

The first log shipper I started with, creatively titled “logstash-forwarder”, was created by the author of Logstash and is written in Go, so it is super fast and has a very small footprint.  Another benefit of this shipper is that connections to the LS server are wrapped in TLS, so it solves the problem that straight TCP collectors have by securing the data in transit.

There are great instructions for getting up and going on the project github page, including instructions for creating a Debian/RPM package out of the Go binary for an easy way to distribute the shipper.  If you plan on shipping logs via a Docker container, I would suggest looking through the docs on the github page for how to build the Debian package.

The recently released version 0.4.0 was an attractive option because it added the ability to tail logs, so LSF won’t try to forward an entire log file if the “pipe” to the LS server gets broken or the agent dies and needs to be restarted.  Prior to the 0.4.0 release these situations could bog down or crash the LS server, record logs out of order, or create duplicates.

To run logstash-forwarder with the appropriate tailing flag turned on, use this command.

/opt/logstash-forwarder/bin/logstash-forwarder -tail -config /etc/logstash-forwarder

A couple of things to note.  The /opt/logstash-forwarder/bin/logstash-forwarder part is the path the binary was installed to.  The -tail flag tells LSF to tail the log.  The -config flag specifies where the LSF client should look for a configuration to load.

The configuration can be as simple (or complicated) as you want.  It basically just needs a cert to communicate with the Logstash server.

{
 "network": {
   "servers": [ "<server>:<port>" ],
   "ssl certificate": "/opt/certs/logstash.crt",
   "ssl key": "/opt/certs/logstash.key",
   "ssl ca": "/opt/certs/logstash.crt",
   "timeout": 15
 },

 "files": [
 {
   "paths": [ "/var/log/*.log" ],
   "fields": { "type": "syslog" }
 }
 ]
}
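If you don’t already have a cert and key pair, a self-signed one can be generated with openssl.  This is just a minimal sketch; the paths and common name here are assumptions, so adjust them to match your environment and the paths in the config above.

openssl req -x509 -nodes -newkey rsa:2048 -days 1095 -subj "/CN=logstash.example.com" -keyout /opt/certs/logstash.key -out /opt/certs/logstash.crt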

By default, the LSF client can be somewhat noisy in its stdout logging (especially for a Docker container) so we can turn down the info logging so that only errors and alerts are logged.

/opt/logstash-forwarder/bin/logstash-forwarder -quiet -tail -config /etc/logstash-forwarder

There are more options, of course; you can list them out by running the binary with no additional options passed in.  But for my use case, quiet and tail were all I needed.

Since the theme of this mini series is how to get everything running in Docker, I will show you what a logstash-forwarder Docker image looks like here.  The Dockerfile for creating the logstash-forwarder image is pretty straightforward.  I have chosen to install a few extra tools into the container that help with troubleshooting, should there ever be an issue with the client running inside the container.

We also inject the deb package into the container, as well as the certs.

FROM debian:wheezy

ENV DEBIAN_FRONTEND noninteractive

# Install
RUN apt-get update && apt-get install -y -qq vim curl netcat
ADD logstash-forwarder_0.4.0_amd64.deb /tmp/
RUN dpkg -i /tmp/logstash-forwarder_0.4.0_amd64.deb

# Config
RUN mkdir -p /opt/certs/
ADD local.conf /etc/logstash-forwarder
ADD logstash-forwarder.crt /opt/certs/logstash-forwarder.crt
ADD logstash-forwarder.key /opt/certs/logstash-forwarder.key

# start lsf
CMD ["/opt/logstash-forwarder/bin/logstash-forwarder", "-quiet", "-tail", "-config", "/etc/logstash-forwarder"]

I believe there are future plans to create a logger similar to LSF but written in JRuby so it is easier to maintain and to fit more with the style of the LS project.

The last piece to get this working is the docker run command.  It will depend on your own environment, but a generic run command might look like the following.  Obviously replace “<myserver>” and “<org/image:tag>” with your specific information.

docker run -v /data:/data --name lsf --hostname <myserver> <org/image:tag>

Log Courier

I was having issues getting logstash-forwarder to work correctly at one point, so I began to explore different options for loggers and stumbled across this awesome project.  Log Courier is like logstash-forwarder on steroids.  It is much more customizable and offers a number of options that aren’t available in logstash-forwarder, such as the ability to do log processing at the client end, which is a major bonus over other log shippers.

The project (and its documentation) live in this github project.  The docs are very good and the maintainer is very responsive to issues and questions, so I recommend checking out the project as a reference.  Log Courier is similar to LSF in that you need to build it and create a package for it, so as a prerequisite you will need to have Go installed.

Again, all of this information is on the github project, which does a much better job of explaining how to get this all working.  To help alleviate some of the build issues that turn people away from this project, I believe there are discussions now about creating publicly available Debian and RPM packages.

Once you have your package created and installed you can run LC as follows:

/opt/log-courier/bin/log-courier -config /etc/courier.conf

The only flag we need to pass is the -config flag.  There are a few other command line flags available, but almost all of the configuration for LC is done via the config file that gets passed to the client when it starts, including logging levels and other customizations.  It isn’t really mentioned here, but the default behavior for LC is to tail the logs, so you don’t need to worry about crashing your LS server if the stream ever breaks.  LC is good at figuring out what it should do and picking up where it left off.

You can check the docs for all of the custom configurations you can pass to LC here.

Let’s take a look at what a sample configuration file might look like in LC to demonstrate some of its enhanced features.

{
 "network": {
   "servers": [ "<server>:<port>" ],
   "ssl ca": "/opt/certs/courier.crt",
   "timeout": 15
 },

 "general": {
   "log level": "debug"
 },

 "files": [
 {
   "paths": [ "/data/*foo.log" ],
   "fields": { "type": "foo" }
 },
 {
   "paths": [ "/data/*bar.log" ],
   "fields": { "type": "bar" },
     "codec": {
     "name": "multiline",
     "pattern": "^%{TIMESTAMP_ISO8601} ",
     "negate": true,
     "what": "previous"
   }
 }
 ]
}

The network section is similar to LSF: you need to point the client at the correct server and tell it which cert to connect with.  Generating the cert is basically the same as it was for LSF, just use a different name.  The “general” section provides a place to set options at the global level for LC.  This configuration also uses glob patterns to match log paths, the same way LSF does.  The most interesting part is that in this configuration we can do multiline processing at the client level, which LSF does not support.  This takes some of the processing strain off of the server and is a great reason to use LC.
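To make the multiline codec a little more concrete, here is a hypothetical log excerpt.  The idea is that only the first line starts with an ISO8601 timestamp, so with negate set to true and what set to previous, the continuation lines get appended to the timestamped line and shipped as a single event.

2015-04-01T12:00:01,123 ERROR Something went wrong processing the request
java.lang.RuntimeException: boom
    at com.example.Foo.bar(Foo.java:42)
    at com.example.Baz.qux(Baz.java:7)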

And because this is another Docker example, here is the Dockerfile.  This is very similar to the LSF Dockerfile; we are just using a different .deb file (which we created above), different certs, and a different CMD to start the logger.

#FROM ubuntu:14.04
FROM debian:wheezy

ENV DEBIAN_FRONTEND noninteractive

# Install
RUN apt-get update && apt-get install -y -qq vim curl netcat
ADD log-courier_1.6_amd64.deb /tmp/
RUN dpkg -i /tmp/log-courier_1.6_amd64.deb

# Config
RUN mkdir -p /opt/certs/
ADD local.conf /etc/courier.conf
ADD courier.crt /opt/certs/courier.crt
ADD courier.key /opt/certs/courier.key

# start log courier
CMD ["/opt/log-courier/bin/log-courier", "-config", "/etc/courier.conf"]

As mentioned, I already have the Debian package built, so I simply inject it into my Docker image.  Running the Docker image is similar to LSF.

docker run -v /data:/data --name courier --hostname <myserver> <org/image:tag>

Conclusion

Some of the configurations I am using are specific to my workflow and environment, but most of this can be adapted.  Running the LSF or LC clients in containers is a great way to isolate your logging client.  The reason this works so well in my scenario is because we use the /data volume as a pattern on all of our host machines to write application specific logs to, which makes it very easy to point the LSF and LC clients at the right location.  If you aren’t using any custom directories (or are using lots of them) you can just update the volume mounts in your docker run command to look in the locations where you expect logs to be.

Once you have the logging workflow mastered you can start writing unit files to run these containers via systemd or fleet, or injecting them into cloud configs, which makes scaling these logging containers simple (see the sketch below).  Our environment leverages CoreOS, so we write unit files for the loggers in our cloud configs, which takes care of scaling this workflow.  If you aren’t using CoreOS or systemd this could probably be made to work with docker-compose, but I haven’t tried it yet.
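For reference, a minimal unit file sketch for the Log Courier container might look something like the following.  The unit and container names are assumptions, and the image placeholder matches the docker run example above, so adjust both for your environment.

[Unit]
Description=log courier shipper container
After=docker.service
Requires=docker.service

[Service]
Restart=always
# remove any stale container from a previous run, ignore failures
ExecStartPre=-/usr/bin/docker rm -f courier
# %H expands to the host name, mirroring the --hostname flag used earlier
ExecStart=/usr/bin/docker run -v /data:/data --name courier --hostname %H <org/image:tag>
ExecStop=/usr/bin/docker stop courier

[Install]
WantedBy=multi-user.target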

If you don’t use Docker, you can easily strip out the Docker specific parts and run LSF or LC directly.  The main issue to work through will be creating the packages for distribution and installation.  Once you have the packages you should be good to go; all of the commands and configuration that Docker runs should work the same outside of it.

Feel free to comment or let me know if you have questions.  There are a lot of moving pieces to this workflow but it becomes pretty powerful once all of the components are set up and put in place.



Performance tuning ELK stack

Building on my previous post describing how to quickly set up a centralized logging solution using the ElasticSearch + Logstash + Kibana (ELK) stack, we now have a fully functioning, Docker based ELK stack aggregating and handling our logs.  The only problem is that performance is either waaay too slow or the stack seems to crash.  As I worked through this problem myself, I found a few settings that vastly improved the stability and performance of my ELK stack.

So in this post I will share some of the useful tweaks and changes that worked in my own environment to help squeeze additional performance out of the setup.

Hopefully these adjustments will help others!

Adjusting Logstash

Out of the box, Logstash does a pretty good job of setting things up with reasonable default values.  The main adjustment that I have found to be useful is setting the number of Logstash “workers” when the Logstash process starts.  A good rule of thumb is one worker per CPU.  So if the server has 4 CPUs, your Logstash startup command would look similar to the following.

/opt/logstash/bin/logstash --verbose -w 4 -f /etc/logstash/server.conf

The important bit to note is the “-w 4” part.  A poorly configured server.conf file may also lead to performance issues but that will be very specific to the user.  For the most part, unless the configuration contains many conditionals and expensive codec calls or excessive filtering, performance here should be stable.

If you are concerned about utilization, I recommend watching CPU and memory consumption of the Logstash process; a CPU that is constantly maxed out is a sign that there could be a configuration issue.
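A quick and dirty way to keep an eye on the process (assuming the process command line contains “logstash”) is something like this.

top -p "$(pgrep -f logstash | head -1)"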

The main thing to be aware of with a custom number of workers is that some codecs may not work properly because they are not thread safe.  If you are using the “multiline” codec in any of your inputs then you will not be able to leverage multiple workers until the thread safety problems have been fixed.  The good news is that this is a known issue and is being worked on, and hopefully it will be fixed by the time 1.5.0 is released.  It tripped me up initially, so I thought I would mention it.

Increase Java heap memory

It turns out that ElasticSearch is a bit of a memory hog once you start actually sending data through Logstash for ES to consume.  In my own testing, I discovered that logs would mysteriously stop being recorded into ES and consequently would fail to show up in my dashboards.

The first tweak to make is to increase the amount of memory available to Java to process ES indices.  I discovered that there is a script ES uses to load up Java when it starts, which passes in a default value of 1GB of RAM.

After some digging, I discovered that the default ES configuration I was using was quickly running out of memory and was crashing because the ES heap memory couldn’t keep up with the load (mostly indexes).

Here is a sample of the errors I was seeing when ES and Logstash stopped processing logs.

message [out of memory][OutOfMemoryError[Java heap space]]

This was a good starting point for investigating.  Basically, what this means is that the ES container had its Java heap set to 1GB, which was exhausting the memory allocated to ES even though there was much more memory available on the server.

To increase this memory limit, we will override this script with our own custom values.

This script is called “elasticsearch.in.sh” and we will be overriding the default heap values by setting ES_HEAP_SIZE to “4g”, as shown below.  The general guideline is to use a value of about 50% of the total amount of memory, so if your server has 8GB of memory then setting it to 4GB gives us the 50% we are looking for.

There are many other options that can be overridden, but the most important value is the max memory value that we have updated.

We can inject this custom value as an environment variable in our Dockerfile which makes managing custom configurations much easier if we need to make additions or adjustments later on.

ENV ES_HEAP_SIZE=4g

I am posting the script that sets these values below as a reference, in case there are other values you need to override.  Again, we can use environment variables to set these up in our Dockerfile if needed.

#!/bin/sh

ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/elasticsearch-1.5.0.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*

if [ "x$ES_MIN_MEM" = "x" ]; then
 ES_MIN_MEM=256m
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
 ES_MAX_MEM=1g
fi
if [ "x$ES_HEAP_SIZE" != "x" ]; then
 ES_MIN_MEM=$ES_HEAP_SIZE
 ES_MAX_MEM=$ES_HEAP_SIZE
fi

# min and max heap sizes should be set to the same value to avoid
# stop-the-world GC pauses during resize, and so that we can lock the
# heap in memory on startup to prevent any of it from being swapped
# out.
JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM}"
JAVA_OPTS="$JAVA_OPTS -Xmx${ES_MAX_MEM}"

# new generation
if [ "x$ES_HEAP_NEWSIZE" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -Xmn${ES_HEAP_NEWSIZE}"
fi

# max direct memory
if [ "x$ES_DIRECT_SIZE" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=${ES_DIRECT_SIZE}"
fi

# set to headless, just in case
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"

# Force the JVM to use IPv4 stack
if [ "x$ES_USE_IPV4" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
fi

JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"

JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

# GC logging options
if [ "x$ES_USE_GC_LOGGING" != "x" ]; then
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCTimeStamps"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintClassHistogram"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintTenuringDistribution"
 JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime"
 JAVA_OPTS="$JAVA_OPTS -Xloggc:/var/log/elasticsearch/gc.log"
fi

# Causes the JVM to dump its heap on OutOfMemory.
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
# The path to the heap dump location, note directory must exists and have enough
# space for a full heap dump.
#JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof"

# Disables explicit GC
JAVA_OPTS="$JAVA_OPTS -XX:+DisableExplicitGC"

# Ensure UTF-8 encoding by default (e.g. filenames)
JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"

Then when Elasticsearch starts up it will look for this custom configuration script and start Java with the desired 4GB of memory.  This is one easy way to squeeze more performance out of your setup without making any other changes to the server.

Modify Elasticsearch configuration

This one is also very easy to put in place.  We are already using a custom elasticsearch.yml, so the only thing that needs to be done is to add some additional settings to this file, rebuild the container, and restart the ES container with the updated values.

A good setting to help control ES memory usage is the indices fielddata cache size.  Limiting this cache size makes sense because you rarely need to retrieve logs that are older than a few days.  By default ES will hold old indices in memory and never let them go, so unless you have unlimited memory it makes sense to limit it in this scenario.

To limit the cache size simply add the following value anywhere in your custom elasticsearch.yml configuration file.  This setting and adjusting the Java heap memory size should be enough to get started but there are a few other things that might be worth checking.

indices.fielddata.cache.size:  40%

If you only make one change, add this line to your ES configuration!  This setting will let go of the oldest index data first so you won’t be dropping new indices, which 9 times out of 10 is what you want when accessing data in Logstash.  More information about controlling memory usage can be found here.
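If you want to see how much fielddata memory is actually in use before and after the change, the cat API can show it per node (assuming your ES version includes the cat endpoints, which 1.4.x does).

curl "http://localhost:9200/_cat/fielddata?v"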

Another easy performance boost worth looking at is disabling swap if it has been enabled.  In most cloud environments and images swap is already turned off, but it is always worth checking.

To bypass the OS swap setting you can configure a no swap value in ES by adding the following to your elasticsearch.yml configuration file.

bootstrap.mlockall: true

To check that this value has been configured properly you can run this command.

curl http://localhost:9200/_nodes/process?pretty

This may cause memory warnings when ES starts up (e.g., “Unable to lock JVM memory (ENOMEM).  This can result in part of the JVM being swapped out.  Increase RLIMIT_MEMLOCK (ulimit).”), but you should be able to ignore these warnings.  If you are concerned, raise the memlock limits at the OS level, as demonstrated below.
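One common way to raise the lock limit on the host is an entry in /etc/security/limits.conf similar to the following.  The elasticsearch user name is an assumption, so use whichever user ES actually runs as.

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited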

Misc

Other low hanging fruit includes raising the open file limits on the OS.  ES can run into problems if there is a cap on the number of files its processes can have open at a time.  I have run into open file limit issues before and they are never fun to deal with.  This shouldn’t be an issue if you are running ES in a Docker container with the Ubuntu 14.04 base image.

If you aren’t sure about the open file limits for ES you can run the following command to get a better idea of the current limits.

ulimit -n

Make sure both the soft and hard limits are either set to unlimited or to an extremely high number.
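To raise them persistently, the same /etc/security/limits.conf approach shown earlier for memlock works here as well; again, the user name is an assumption.

elasticsearch soft nofile 65536
elasticsearch hard nofile 65536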

This should take care of most if not all of the stability issues.  After putting these changes in place in my own environment I went from multiple crashes per day to so far none in over a week.  If you are still experiencing issues you might want to take a look at the ES production deployment guide for help or the #logstash and #elasticsearch IRC channels on freenode.



Running ELK on Docker

I wrote a post a while back about how to get the ElasticSearch + Logstash + Kibana stack set up, and I have recently been very involved with Docker, so I thought it would be appropriate to update that post with the new Docker way of doing things.

Update (5/9/15) – I have created a github repo containing configs for running this.  Reader Sergio has also posted a similar solution on github; you can check it out here if you want to try it out.

I found a surprising lack of posts describing how to run the ELK stack with Docker and docker-compose.  This post is much longer and more detailed than usual, so feel free to jump around to different sections for details on different components.  I am planning a follow up to this post with instructions on how to configure Logstash, the logstash-forwarder client, and Kibana to do interesting things with logs stored in ElasticSearch.

There are a lot of other posts about how to get the stack to work, but they are either already out of date (the Docker world changes fast) or don’t cover the specific details of how the different bits work together.  The other thing I have observed is that most of the other guides are not done with Docker, which is something that makes life easier.

So the first thing I’ll cover is how to build the Docker images.  If you are interested I can make this stuff available on the Docker hub as images or public Dockerfiles.  However, for this article and in general I strongly prefer to write my own Dockerfiles, so I will be posting my custom configs and files here rather than pulling other (sometimes official) prebuilt images.

Logstash

The first component we will get set up is the Logstash server.  This setup is also using the log-courier input plugin.  Log-courier is a more customizable and flexible client for forwarding logs to Logstash.  I use both logstash-forwarder and log-courier in this configuration to allow for a more flexible setup.

The following is a Dockerfile that will build a Logstash 1.5.0 image.  One thing to note about this approach is that you can swap out the LOGSTASH_VER value and the image will be updated to the correct version automatically and be ready to deploy when the image gets rebuilt.

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive
ENV LOGSTASH_VER 1.5.0.rc2
WORKDIR /opt

# Dependencies
RUN apt-get update -qq && \
 apt-get install -y -qq \
 wget \
 python \
 openjdk-7-jre-headless

# Install Logstash
RUN wget --quiet "https://download.elasticsearch.org/logstash/logstash/logstash-$LOGSTASH_VER.tar.gz" -O "/opt/logstash-$LOGSTASH_VER.tar.gz" --no-check-certificate && \
 tar zxf logstash-$LOGSTASH_VER.tar.gz && \
 mv logstash-$LOGSTASH_VER logstash

# Install plugins
RUN /opt/logstash/bin/plugin update logstash-output-zeromq
RUN /opt/logstash/bin/plugin install logstash-input-log-courier

# Config files
ADD server.conf /etc/logstash/server.conf
ADD logstash-forwarder.key /etc/logstash/logstash-forwarder.key
ADD logstash-forwarder.crt /etc/logstash/logstash-forwarder.crt

# lumberjack port
EXPOSE 4545
# log-courier port
EXPOSE 4546

CMD /opt/logstash/bin/logstash -f /etc/logstash/server.conf

This will install Logstash version 1.5.0.rc2 and the logstash-input-log-courier plugin, add certificates for the forwarding clients, and start Logstash with the server configuration.

In addition to this Dockerfile you will need to generate some certificates for logstash-forwarder clients and the Logstash server itself to use, as well as the server configuration used by the Logstash server.  Below I have a sample but extremely barebones server.conf configuration file.

input {
  lumberjack {
    port => 4545
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
    codec => plain {
      charset => "ISO-8859-1"
    }
  }

  courier {
    port => 4546
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    cluster => "elasticsearch"
  }
}

I will thicken this config up in the next post on customizing Logstash and doing interesting things with Kibana.  For now, we are defining a courier and a lumberjack input, used to ingest logs into Logstash, as well as one output, which tells Logstash what to do with the logs; in this example it just stuffs them into ES.

To generate the certificates needed by logstash/logstash-forwarder you can either follow the instructions on the logstash-forwarder github page or use the following command to generate the certs and subsequently inject them into the Docker image.  It should probably go without saying, but make sure the version of openssl used to generate these is an up to date and secure version.

openssl req -x509 -nodes -sha256 -days 1095 -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

You will need to follow a few prompts to fill out the certificate details, again you can reference the logstash-forwarder github page if you get stuck or are unsure of how to configure the certificate.

After the certs are generated make sure that the names of the output files match up with the names in the above Dockerfile, and that is pretty much it for getting Logstash ready.
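If you want to sanity check the cert before baking it into the image, openssl can print the subject and validity dates for you.

openssl x509 -in logstash-forwarder.crt -noout -subject -dates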

ElasticSearch

The ElasticSearch image is also pretty straightforward.  This will build the version specified in the ES_PKG_NAME variable, which is 1.4.4 currently, and will configure a few things.

# Pull base image.
FROM dockerfile/java:oracle-java7

# Set install version
ENV ES_PKG_NAME elasticsearch-1.4.4

# Install ElasticSearch
RUN \
 cd / && \
 wget https://download.elasticsearch.org/elasticsearch/elasticsearch/$ES_PKG_NAME.tar.gz && \
 tar xvzf $ES_PKG_NAME.tar.gz && \
 rm -f $ES_PKG_NAME.tar.gz && \
 mv /$ES_PKG_NAME /elasticsearch

# Define mountable directories
VOLUME ["/data"]

# Define working directory
WORKDIR /data

# Custom ES config
ADD elasticsearch.yml /elasticsearch/config/elasticsearch.yml

# Define default command
CMD ["/elasticsearch/bin/elasticsearch"]

# Expose ports
EXPOSE 9200
EXPOSE 9300

The main key to getting ES to work is getting the configuration file set up correctly.  In this example we are mounting local storage (/data) from the host OS into the container so that if the container dies, the indexes and other data aren’t wiped out.  There are also a few other configuration settings that get set here to lock things down and to make Kibana 4 happy.

http.cors.allow-origin: "/.*/"
http.cors.enabled: true
cluster.name: elasticsearch
node.name: "logstash.domain.com"
path:
 data: /data/index
 logs: /data/log
 plugins: /data/plugins
 work: /data/work

ES is very straightforward to set up; once it is configured it pretty much just runs.
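Once the container is up, a quick way to verify that ES is alive is to hit the cluster health endpoint; assuming you published port 9200 as in the Dockerfile above, something like this works.

curl http://localhost:9200/_cluster/health?pretty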

Kibana

This will build the newest iteration of Kibana, which is 4.0.0 as of this writing.  If you aren’t living on the bleeding edge and want to know how to get Kibana 3.x.x working let me know and I will post the configuration for it.

FROM ubuntu:14.04

# Dependencies
RUN apt-get update -qq
RUN apt-get install -y -qq nginx-full wget vim

# Kibana
RUN mkdir -p /opt/kibana
RUN wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.0-linux-x64.tar.gz -O /tmp/kibana.tar.gz && \
 tar zxf /tmp/kibana.tar.gz && mv kibana-4.0.0-linux-x64/* /opt/kibana/

# Configs
ADD kibana.yml /opt/kibana/config/kibana.yml

EXPOSE 5601

CMD /opt/kibana/bin/kibana

So the Dockerfile is pretty straightforward, but there are a few tidbits to be aware of.  Kibana 4.x.x is significantly different in how it works than 3.x.x, so you will need to make a few adjustments if you are familiar with the old version.

You will need to pick and choose the bits out of the following configuration to suit your needs.  For example, you will need to adjust the elasticsearch_url, username, password and will need to decide whether to turn ssl on or off.  There are obviously more options but most of them probably don’t need to be adjusted for now.  Here is what the sample config looks like.

port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://logstash.domain.com:9200"
elasticsearch_preserve_host: true
kibana_index: ".kibana"
default_app_id: "discover"
request_timeout: 300000
shard_timeout: 0
verify_ssl: false

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

That’s pretty much it; most of the difficulty in getting the new version of Kibana working is in the configuration, so if you want to tweak things or if something isn’t working, that is the first place to look.

Docker Compose (glue the pieces together)

This is an integral part of the setup.  This is what controls the different containers and glues everything together.  Luckily it is easy to get set up and working.  If you aren’t familiar with it, docker-compose is the recent rebranding of the old “fig” tool, a Docker orchestration tool for running complex Docker applications easily.

The official docs are pretty good and detailed so you can visit their site if you have any questions about how to install or how to get any of the components working here.

The first step is to download and install docker-compose.  Here I am using an Ubuntu system.

sudo pip install -U docker-compose

There are a few docker-compose command line commands to be familiar with which we’ll get to next, but first I will post the sample docker-compose configuration file to test out your ELK stack.

kibana:

 build: /home/<user>/elk/kibana/4.0.0
 restart: always
 ports:
 - "5601:5601"
 links:
 - "elasticsearch:elasticsearch"

elasticsearch:

 build: /home/<user>/elk/elasticsearch/1.4.4
 restart: always
 ports:
 - "9200:9200"
 - "9300:9300"
 volumes:
 - "/data:/data"

logstash:

 build: /home/<user>/elk/logstash/logstash-1.5.0
 restart: always
 ports:
 - "4545:4545"
 - "4546:4546"

Most of the configuration is straightforward.  Here are the main commands to get everything stitched together and working.

  • docker-compose build (from the directory the docker-compose.yml file is in)
  • docker-compose up (to test the stack)
  • docker-compose kill (bring it down)

After you have ironed out any build and run issues and the stack starts cleanly with no errors, you can bring the stack up in detached mode.

  • docker-compose up -d

Additionally, you can look at the logs if something smells fishy.

  • docker-compose logs

Design considerations

One thing that readers might wonder about is the scalability of this setup.  This will scale up very easily but not out.  However, this should be able to handle up to 100k events/second on the Logstash end so there will be other bottlenecks before the components (ES and Logstash) fall down.  I haven’t pushed my own setup this far yet but have been able to get to around 30k/sec before Logstash dies, which I’m still investigating.  Even with that amount of activity and Logstash choking, ES and Kibana don’t get affected.

So if you use this as a guide for a production setup I would recommend that you use a decently sized server, at least 4 CPU, 8 GB memory and adjust the memory and cpu options for the Logstash component if you plan on throwing a lot of logs at it (>30k/s).  I will revisit once I have worked out all the performance issues with some best practices for making Logstash run more smoothly.

I would be interested to hear recommendations on how to scale this out, but this setup should scale up decently for most scenarios.  I have not played around with ES sharding across hosts, but I imagine it wouldn’t be super complicated, especially using container volume mounts to store the data and indexes at the host OS level.


Mount an external EBS volume in AWS

Creating and attaching external volumes is one of those administration tasks that is really nice to know how to do, but for me it doesn’t happen every day, so it is really easy to forget the steps, which makes it a little more painful, especially with deadlines and people watching over your shoulder.  Having said that, I think it is worth writing a post about how to do it, because it happens just often enough that I have trouble keeping everything straight, and I’m sure others run into this as well, so that’s what I will be writing about today.

There is good documentation for how to do this, but there are a lot of separate steps, so consolidating the components might be helpful to readers who stumble across this.  I’m sure there are other ways to accomplish this, but I don’t think it is necessary to cover everything here.

Create your “floating volume”

This step is straightforward.  In the AWS EC2 console, choose the type of volume this will be (SSD or magnetic), the availability zone, and any other options here.

create ebs volume

After your volume has been created you will want to attach it to an instance.  This part is important because the changes could break your OS volume if you write to your fstab file incorrectly.  In this example I am choosing to attach the EBS volume as /dev/xvdf, but you could name it differently if that better corresponds to your setup.

attach ebs volume

After the volume has been attached you can check that it has been picked up by the OS by either checking the /dev directory or by running fdisk -l and looking for the size of the disk you just attached.

It is worth pointing out that all of the steps in the AWS console can alternatively be done with the aws-cli tool.  It is probably easier but for the sake of time and illustration I am leaving those steps out.  Feel free to reach out if you are interested in the cli tool and I can update this post.

If you run fdisk -l you will notice that the device is empty, so you will need to format the disk.  In this instance I am formatting the disk as ext4.  So use the following command to format it.

sudo mkfs.ext4 /dev/xvdf

After the volume has been formatted you can mount it to your OS.

Attaching the volume

sudo mkdir /data
sudo mount -t ext4 /dev/xvdf /data

If you need to resize the filesystem for whatever reason, you can use the resize2fs command.

sudo resize2fs <device>
sudo resize2fs /dev/xvdf

Here you create the directory to mount the volume to (if it doesn’t already exist) and then mount it.  At this point you could stop if you just needed temporary access to the storage on this device.  But if you want the mount to persist and survive a reboot, add an entry to your /etc/fstab file to make sure the /data directory gets the volume mounted to it after a reboot.  Something like the following would work.

/dev/xvdf       /data   ext4    defaults,nobootwait        0       0

The entry is pretty easy to follow but may be confusing for those who are not familiar with how fstab works.  I will break down the various components here.

The first parameter is the location of the volume (/dev/xvdf) and is referred to as the file_system field.

The second parameter specifies where to mount the volume to (/data) and is referred to as the dir field.

The third field is the type.  This is where you specify the file system type of the device to be mounted.  If you didn’t format this volume previously, it would create problems for the OS when it tried to load your volume from this file.

The fourth section is the options for the mount.  Here the defaults,nobootwait part is very important.  If you don’t have the nobootwait option specified, your OS could potentially hang on boot if it can’t find the specified volume, so this option lets the boot process skip the mount if there are any problems.

The fifth field is to either enable or disable the dump option.  Unless you are familiar with or use the dump command you will almost always set this to 0.

The last section is the pass section.  This simply tells the OS whether it should run an fsck on this volume.  Here I have it set to 0 so it doesn’t get checked, but for OS volumes this could be important to turn on.
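Before relying on a reboot to test the entry, it is worth validating it; mount -a will attempt to mount everything listed in fstab and will surface any syntax problems, and df confirms the volume is where you expect it.

sudo mount -a
df -h /data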

Next steps

There are many more things you can do with fstab so if you are interested in other options for how to mount volumes you can look at the fstab documentation for more insights and information.

If you ever wanted to float this volume to another host it would be easy to do, and would not require any new or special formatting since that was already taken care of here.  The steps would look similar to the following (a rough command sketch follows the list).

  • Unmount volume from current OS
  • Remove entry in /etc/fstab for volume mount
  • Detach mount in AWS console from current OS
  • Attach mount to new OS
  • Mount volume manually in new OS to test if it works
  • If the mount works add a new entry in /etc/fstab
  • Done
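Here is a rough sketch of what those steps look like from the command line.  The volume and instance IDs are placeholders, and this assumes the aws-cli tool mentioned earlier is installed and configured.

# on the old host
sudo umount /data
# remove the /dev/xvdf line from /etc/fstab on the old host
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-yyyyyyyy --device /dev/xvdf
# then on the new host
sudo mkdir -p /data
sudo mount -t ext4 /dev/xvdf /data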

So that’s pretty much it.  Hopefully this is useful for everybody.


Kubernetes resize and rolling updates

If you haven’t heard of or used Kubernetes yet, I highly recommend taking a look (see the link below).  I won’t take too much time here today to talk about the Kubernetes project because there is just too much to cover.  Instead I will be writing a series of posts about how to work with Kubernetes and share some tricks and tips that I have discovered in my experiences so far with the tool.  Since the project is still very young and moving incredibly quickly, the best place to get information is either the IRC channel (#google-containers), the mailing list, or their github project.  Please go look at the github project if you are new to Kubernetes, or are interested in learning more about it, especially their docs and examples sections.

As I said, updates and progress have been extremely fast paced, so it isn’t uncommon for things in the Kubernetes project to seem obsolete before they have even been fully implemented.  For example, the command line tool for interacting with a Kubernetes cluster has already changed faces a few times, which was confusing to me when I first started out.  Kubecfg is on the way out and the project maintainers are working on removing old references to it.  On the flip side, the kubectl command is maturing quite nicely and will be around for a while, along with the subcommands that I will be describing.

Now that I have all the basic background stuff out of the way: the version of kubectl I am using for this demonstration is v0.9.1.  If you just discovered Kubernetes or have been using kubecfg (as explained above), you will want to get more familiar with kubectl because it is the preferred tool going forward, at least at this point.

There are a few handy subcommands that come baked into the kubectl command.  The first is the resize command.  This command allows you to scale the number of running containers managed by Kubernetes up or down on the fly.  Obviously this can be really powerful!  The syntax is pretty straightforward, and I have an example listed below.

kubectl resize --current-replicas=6 --replicas=0 rc test-rc

The --current-replicas argument is optional, --replicas defines the *desired* number of replicas to have running, rc specifies that this is a replication controller, and finally, test-rc is the name of the replication controller to scale.  After you scale your replication controller you can quickly check the status via the following command.

kubectl get pod

Another handy tool to have when working with Kubernetes is the ability to deploy new images as a rolling update.

kubectl rollingupdate test-rc -f test-rc-2.yml --update-period="10s"

The rollingupdate command takes a few arguments.  The first is the name of the current replication controller that you would like to update.  The second is the yml file of the new replication controller to replace it with, and the third, optional, argument is --update-period, which allows a user to override the default time it takes to spin up a new container and spin down an old one.

Below is an example of what your test-rc-2.yml file may look like.

kind: ReplicationController
apiVersion: v1beta1
id: test-rc-2
namespace: default
desiredState:
  replicas: 1
  replicaSelector:
    name: test-rc
    version: v2
  podTemplate:
    labels:
      name: test-rc
      version: v2
    desiredState:
      manifest:
        version: v1beta1
        id: test-rc
        containers:
          - name: test-image
            image: test/test:new-tag
            imagePullPolicy: PullAlways
            ports:
              - name: test-port
                containerPort: 8080

There are a few important things to notice.  The first is that the id must be unique; it can’t be a name that is already in use by another replication controller.  All of the label names should remain the same except for the version.  The version is used to signify that the new replication controller is running a new Docker image, and the version number should be unique, which helps keep track of which image version is running.

Another thing to note: if your original replication controller did not contain a unique key (like version) then you will need to update the original replication controller first, adding a unique key, before attempting to run the rolling update (see the snippet below).
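As a hypothetical fragment (not a complete controller definition), the change to the original test-rc is just adding a matching version key to both the replicaSelector and the pod labels, for example:

desiredState:
  replicas: 6
  replicaSelector:
    name: test-rc
    version: v1
  podTemplate:
    labels:
      name: test-rc
      version: v1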

If both replication controllers don’t have the same format you will get an error similar to this.

test-rc.yml must specify a matching key with non-equal value in Selector for <selector name>

So that’s pretty much it for now.  I will revisit this post again in the future as new flags and subcommands are added to kubectl for managing and updating replication controllers.  I also plan on writing a few more posts about other aspects and areas of kubectl and running Kubernetes, so please check back soon!
