Introduction to Logstash+ElasticSearch+Kibana

There are a few problems with the current state of logging.  The first is that there is no real unified or agreed-upon standard for how to do logging across software platforms, so it is typically left up to the software designer to decide how to design and output logs.  Because of this non-standardized approach, logs end up in many different formats, which is obviously an issue if you are attempting to gather useful and meaningful data from a variety of sources.  In response to this sprawl of log types and formats, numerous logging tools have been created, each trying to solve a particular piece of the logging problem, so selecting one tool that offers everything can quickly become a chore.  The other big problem is that logs can produce an overwhelming amount of information.  Many traditional tools do little to correlate and represent the data they collect, so narrowing down a specific issue can also become very difficult.

Logstash solves both of these problems in its own way.  First, it does a great job of abstracting away much of the difficulty of log collection and management.  Say, for example, you need to collect MySQL logs, Apache logs, and syslogs on a system.  Logstash doesn’t discriminate: you simply tell Logstash what to expect and it will go ahead and process those logs for you.  With ElasticSearch and Kibana on top, you can quickly gather useful information by searching through your logs and identifying patterns and anomalies in your data.
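Just to give a taste of what that looks like before we dig in, here is a minimal sketch of a Logstash config (the file paths are only examples, adjust them for your own system):

input {
  file { type => "syslog" path => "/var/log/syslog" }
  file { type => "mysql-error" path => "/var/log/mysql/error.log" }
  file { type => "apache-access" path => "/var/log/apache2/access.log" }
}
output {
  # Echo everything to the console so you can see the events Logstash produces.
  stdout { }
}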

The goal of this post is to take readers through the process of getting up and running, starting from scratch all the way to a working example.  Feel free to skip to any of the various sections if you are looking for something specific.  I’d like to mention quickly that this post covers the steps to configure Logstash 1.4.2 on an Ubuntu 13.10 system, with a log forwarding client on anything you’d like.  You *may* run into issues if you are trying these steps on different versions or Linux distributions.

Installing the pieces:

We will start by installing all of the various pieces that work together to create our basic centralized logging server.  The architecture can be a little confusing at first, so here is a diagram from the Logstash docs to help.

Logstash architecture

Each of the following components does a specific task:

  • Java – The runtime environment that Logstash needs to run.
  • Logstash – Collects and processes the logs coming into the system.
  • ElasticSearch – Stores, indexes, and allows searching of the logs.
  • Redis – Acts as a queue and broker to feed messages and logs to Logstash.
  • Kibana – Web interface for searching and analyzing logs stored in ElasticSearch.

Java

sudo apt-get install openjdk-7-jre
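You can quickly verify the runtime is in place before moving on:

java -version

This should report an OpenJDK 7 runtime.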

Logstash

cd ~
curl -O https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
tar zxvf logstash-1.4.2.tar.gz

ElasticSearch (Logstash is picky about which version gets installed)

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.deb
sudo dpkg -i elasticsearch-1.3.2.deb
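Depending on your system, the package may not start ElasticSearch automatically.  If it isn’t running after the install, register and start the service yourself:

sudo update-rc.d elasticsearch defaults 95 10
sudo service elasticsearch start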

Redis

sudo apt-get install redis-server
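A quick sanity check that Redis is up:

redis-cli ping

It should answer with PONG.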

Kibana

cd ~
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz
tar xvfz kibana-3.0.0.tar.gz

Nginx

sudo apt-get install nginx

Configuring the pieces:

Configuration is an important part of a successful setup because there are a lot of different moving parts here.  If something isn’t working correctly, you will want to double check that you have all of your configs set up correctly.

Redis

This step will bind Redis to all interfaces, including your public one, so that other servers can connect to it.  Find the following line in the /etc/redis/redis.conf file:

bind 127.0.0.1

and change it to the following:

bind 0.0.0.0

We need to restart Redis for it to pick up our change:

sudo service redis-server restart
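To verify the new binding took effect, ping Redis over the network using the server’s address (substitute your own IP):

redis-cli -h 192.168.1.200 ping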

ElasticSearch

Find the following lines in the /etc/elasticsearch/elasticsearch.yml file and change them to:

cluster.name: elasticsearch
node.name: "logstash test"

and restart elasticsearch:

sudo service elasticsearch restart

You can test your config and ElasticSearch by browsing to the name/IP of the host on port 9200, for example http://192.168.1.200:9200.
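Hitting it with curl should return a small JSON document that includes the node name we just set.  Trimmed down, it looks something like:

curl http://192.168.1.200:9200

{
  "status" : 200,
  "name" : "logstash test",
  "version" : {
    "number" : "1.3.2"
  },
  "tagline" : "You Know, for Search"
}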

Logstash server

We need to create a config for the central Logstash server.  Create a file called /etc/logstash/server.conf with the following contents:

input {
  redis {
    host => "192.168.1.200"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "elasticsearch"
  }
}

Replace host with the local IP address of your Redis server; in this case it is on the same host as Logstash.

Finally, fire up the Logstash server with the following command:

~/logstash-1.4.2/bin/logstash --verbose -f /etc/logstash/server.conf
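If you want to check the pipeline before any clients are hooked up, you can push a test event straight into the Redis list yourself (this assumes the redis input’s default json codec):

redis-cli rpush logstash '{"message":"hello from redis","type":"test"}'

The event should be echoed on the Logstash console by the stdout output and indexed into ElasticSearch.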

Kibana

You will need to navigate to your Kibana files.  From the installation steps above, they are in ~/kibana-3.0.0.  To get everything working, we need to edit the file named config.js in the Kibana directory so that it points to the correct host.  Change it from:

elasticsearch: "http://"+window.location.hostname+":9200"

to

elasticsearch: "http://192.168.1.200:9200"

Nginx

We are almost done.  We just have to configure Nginx to serve our Kibana site.  To do this we need to copy the Kibana directory to the default webserver root.

sudo mkdir /var/www
sudo cp -R ~/kibana-3.0.0 /var/www/kibana

Finally, we edit the /etc/nginx/sites-enabled/default file and find the following line:

root /usr/share/nginx/www;

and change it to read as follows:

root /var/www/kibana;

Now restart Nginx:

sudo service nginx restart

Now you should be able to open up a browser and navigate to either http://localhost or to the server’s IP address and get a nice web GUI for Kibana.

Logstash client

We’re almost finished.  We just need to configure the client to forward some logs over to our central Logstash server.  Follow the same instructions for downloading Logstash that we used earlier on the centralized logging server.  Once you have the files ready to go, you need to create a config for the client.

Again we will create a config file.  This time it will be /etc/logstash/agent.conf and we will use the following configuration:

input {
  file {
    type => "apache-access"
    path => "/var/log/apache2/other_vhosts_access.log"
  }

  file {
    type => "apache-error"
    path => "/var/log/apache2/error.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  stdout { }
  redis {
    host => "192.168.1.200"
    data_type => "list"
    key => "logstash"
  }
}
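For reference, once grok matches an access log line, the event that gets shipped to Redis is broken out into named fields, roughly like this (the values below are made up for illustration):

{
  "type"       => "apache-access",
  "clientip"   => "192.168.1.50",
  "verb"       => "GET",
  "request"    => "/index.html",
  "response"   => "200",
  "bytes"      => "612",
  "@timestamp" => "2014-05-05T10:00:00.000Z"
}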

Let’s fire up our client with the following command:

~/logstash-1.4.2/bin/logstash --verbose -f /etc/logstash/agent.conf

If you switch back to your browser and wait a few minutes, you should start to see logs being displayed.  Once events show up on the chart, you have it working!
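If nothing shows up, you can generate some traffic on the client yourself and then watch the queue depth on the Redis host.  Note that on a default Ubuntu Apache setup your requests may land in access.log rather than other_vhosts_access.log, so adjust the paths in agent.conf if needed:

curl http://localhost/
redis-cli -h 192.168.1.200 llen logstash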

Kibana interface

Conclusion

As I have learned, there are always caveats.  For example, I was getting some strange errors on my client endpoint whenever I ran the Logstash agent to forward logs to the central Logstash server.  It turned out that Java didn’t have enough RAM and CPU available.  Be aware that you may run into seemingly random problems if you don’t allocate enough resources to the machine.
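On that note, the Logstash 1.4.x startup script reads an LS_HEAP_SIZE environment variable, so one way to give the JVM more headroom is:

LS_HEAP_SIZE=1024m ~/logstash-1.4.2/bin/logstash --verbose -f /etc/logstash/agent.conf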

Another quick tip for when things just aren’t working correctly: turn on verbosity (as we have done in our examples), which will give you some clues to help identify the more specific problems you are having.

Resources

http://www.slashroot.in/logstash-tutorial-linux-central-logging-server
http://logstash.net/docs/1.4.0/

Josh Reichardt

Josh is the creator of this blog, a system administrator and a contributor to other technology communities such as /r/sysadmin and Ops School. You can also find him on Twitter and Facebook.