Exploring Linux Terminals

I have been experimenting with Linux on a recently acquired Lenovo X1 Carbon and as part of the process I have been exploring and testing out various productivity tools, including terminal emulators.

The first thing I discovered in this process is that there’s a lot of terminals out there. And they’re all slightly different, and in my experience, almost none of them were able to do all of the tweaks I like.

Here is the list of terminals I have tested out (so far).

As you can see, that is a bunch of terminals. To keep things short, the best three I found in my exploration and testing were as follows.

Alacritty – 3rd

It is fast, the defaults are great, and it works perfectly with tiling window managers. This one would probably have been first on the list if it were able to provide a blinking cursor, but as I found with many of the options, there almost always seemed to be a gotcha.

A few high points for this terminal: it is written in Rust, it uses the GPU to offload rendering, it is cross platform, and, maybe my favorite feature, the configuration file is YAML based. The docs are also good and the community is really taking off, so I have a strong feeling this one will continue improving.
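To give a feel for the YAML config, here is a tiny illustrative snippet (not my full setup, and key names can shift a bit between Alacritty versions):

# ~/.config/alacritty/alacritty.yml (illustrative)
font:
  normal:
    family: DejaVu Sans Mono
  size: 11.0

scrolling:
  history: 10000

window:
  padding:
    x: 4
    y: 4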

urxvt – 2nd

The biggest problem with this terminal for me is that it is painful to configure. There are no preferences configured out of the box, which some people prefer, but digging through old blog posts and Perl scripts is not how I like to spend my time.

This terminal is second on the list because it was the only other terminal that was able to do all of the little tweaks and adjustments that suit my preferences. It is also fast and lightweight, which are good things.

Here is the configuration I ended up with if interested.
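For a rough idea of what configuring urxvt involves, a minimal ~/.Xresources sketch (illustrative only, not my exact settings) looks something like this:

! urxvt settings live in ~/.Xresources (reload with xrdb -merge ~/.Xresources)
URxvt.font: xft:DejaVu Sans Mono:size=11
URxvt.scrollBar: false
URxvt.cursorBlink: true
URxvt.saveLines: 10000
URxvt.foreground: #c5c8c6
URxvt.background: #1d1f21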

Kitty – 1st

This terminal was a clear winner. It was able to do all of my custom tweaks and settings, and because it has nice defaults I only needed to add about 10 lines of extra configuration.

This terminal has a lot of other stuff going for it. It is written in C and Python with a GPU-based renderer to offload drawing, which makes it fast; it is easy to configure; and for the most part it just works. The only gotcha I found was that I needed to explicitly turn on copy-on-select in my configuration, but that was easy.

Here is the configuration I ended up with if interested.
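The linked config has the details, but to give a sense of how small a kitty.conf can be, something like this covers the basics (illustrative values; copy_on_select is the copy-on-selection option mentioned above):

# ~/.config/kitty/kitty.conf (illustrative)
font_family      DejaVu Sans Mono
font_size        11.0
scrollback_lines 10000
enable_audio_bell no
# copy selected text straight to the clipboard
copy_on_select   yes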

Conclusion

I plan on keeping this list updated to some extent as I find more terminals to try out. As you can see, there are many different options, and there seem to be more all the time.

Your experience may differ, so obviously take these musings with a pinch of salt, and please do look through the various options and try things out to see what works for you. That said, I do think my use case is specific enough that these recommendations should be helpful in guiding most users.

Here is the repo with all of my various configurations if you want to check something out. There were a few terminal configs that didn’t end up there just because they were too minimal or I didn’t like them.

Read More

Chef data bags with Test Kitchen

As a step towards integrating your Chef cookbooks with Jenkins CI and your testing/release pipeline, it is important to make sure that local changes pass unit and integration tests before being accepted and committed into version control. For example, when running Test Kitchen, data bags and encrypted data bags need to be fully simulated on the local box for many tests to pass correctly. So today I would like to focus on a stumbling block towards Jenkins and integration testing that I ran into recently. There are a few lessons I learned along the way that I would like to share to help clarify things a little bit, because there wasn’t much good information out there on how to do this.

First, I need to give credit where it is due. This post was a great resource in my journey to find a solution to my Test Kitchen data bag issue.

The largest roadblock I found along the way was that the version of Test Kitchen I was using shipped with chef-solo as the default provisioner. There has been a lot of discussion around this topic lately, and (from what I understand) it has pretty much become the general consensus within the Chef community that chef-solo should be replaced by chef-zero. There are a number of advantages to using chef-zero instead of chef-solo, including a lesson I learned the hard way: chef-zero can act as a standalone Chef server, unlocking the ability to store data bags and encrypted data bags without any sort of wacky hacking to get Chef to compile and converge correctly.

There was a good post written recently that expounds more on the benefits of using chef-zero instead of chef-solo. It is here, and is definitely worth the read if you are interested in learning more.

So with that knowledge in mind, here is what a newly updated sample .kitchen.yml file might look like:

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-13.10-i386
  - name: centos-6.4-i386

suites:
  - name: default
    data_bags_path: "test/integration/data_bags"
    run_list:
      - recipe[recipe-to-test]
    attributes:

It’s a pretty straightforward config. The biggest change you will notice is that the provisioner has been switched from chef-solo to chef_zero – I now know that it makes all the difference in the world. The next big change to observe is the data_bags_path in the suites section. This bit of configuration tells the provisioner to look at the specified path when chef-zero spins up and to use it to serve data bags, encrypted data bags, and other information that would normally live on the Chef server that clients talk to.

So inside the test/integration/data_bags directory I have a subdirectory with a JSON file for the specific data I am interested in, called sensu/ssl.json. This file essentially contains the same information that is stored on the Chef server about the SSL certificates used for live hosts in the production environment, just mirrored into a sandbox/integration testing environment.
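In other words, each data bag gets its own folder under the data_bags_path, with one JSON file per data bag item:

test/integration/data_bags/
    sensu/
        ssl.json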

If you’re interested, here is a sample of what the ssl.json file might look like:

{
  "id": "ssl",
  "server": {
    "key": "-----BEGIN RSA PRIVATE KEY----- ...",
    "cert": "-----BEGIN CERTIFICATE----- ...",
    "cacert": "-----BEGIN CERTIFICATE----- ..."
  },
  "client": {
    "key": "-----BEGIN RSA PRIVATE KEY----- ...",
    "cert": "-----BEGIN CERTIFICATE----- ..."
  }
}

Note that the “id” is “ssl”. As far as I know, the file name (minus the .json extension) must match the id when you are creating this JSON file.
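For context, a recipe under test can then read this item exactly the way it would from a real Chef server. A rough sketch (the resource and file paths here are just illustrative):

# Served by chef-zero from test/integration/data_bags/sensu/ssl.json
ssl = data_bag_item("sensu", "ssl")

# Write the server certificate out to disk (illustrative path)
file "/etc/sensu/ssl/cert.pem" do
  content ssl["server"]["cert"]
  mode "0640"
end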

Now you should be able to create and converge your test recipe with test kitchen:

kitchen create ubuntu
kitchen converge ubuntu
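Once the converge works, the rest of the normal Test Kitchen cycle should work too, for example:

kitchen verify ubuntu   # run the integration tests
kitchen test ubuntu     # full destroy/create/converge/verify/destroy cycle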

If you have any difficulty, let me know. I tried to be thorough in this write-up but could have accidentally skipped important information. The main takeaways should be 1) use chef-zero wherever possible and 2) make sure your data bag paths and files are created correctly and referenced correctly in your .kitchen.yml file. Finally, if you are still having issues, make sure you have triple checked the spelling and JSON syntax of your paths and configs.

Read More

Using a self signed cert with Nginx

After the recent Heartbleed incident (which I’m sure many of you remember well) I had to reissue some certificates. It turned out to be a great opportunity for a blog post. Since I do not create and assign certs very frequently, it is a good opportunity to take some notes and hopefully ease the process for others. After patching the vulnerable version of OpenSSL, there are really only a few steps needed to accomplish this. Assuming you already have Nginx installed, which is trivial to do on Ubuntu, the first step is to create the necessary crt and key files.

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt
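If you want to sanity check what was generated before wiring it into Nginx, you can inspect the new certificate:

sudo openssl x509 -in /etc/nginx/cert.crt -noout -subject -dates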

Next you will need to tell Nginx to load your new certs in its config. Here is an example of what the server block in your /etc/nginx/sites-available config might look like. Notice that the ssl_certificate and ssl_certificate_key entries correspond to the cert files we created above, which we stuck in the /etc/nginx directory. If you decide to place these certs in a different location you will need to modify your config to reflect that location.

server {
    listen *:443;
    ssl on;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;
}

Just to cover all our bases, we will also redirect any requests that come in on port 80 (the default web port) to 443 for SSL. This is a simple addition and adds an additional layer of security.

server {
    listen 80;
    return 301 https://$host$request_uri;
}

The final step is to reload your configuration and test to make sure everything works.

sudo service nginx reload

If Nginx fails to reload, more than likely there is some sort of configuration or syntax error in your config file. Comb through it for any potential errors or mistakes. Once your config is loaded properly you can check your handiwork by attempting to hit your site using http://. If your config is working properly it should automatically redirect you to https://.
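Two quick checks can save some guesswork here: nginx -t validates the config syntax before you reload, and curl -I against your own hostname (yourdomain.com below is just a placeholder) should show the 301 redirect coming from the port 80 block:

sudo nginx -t
curl -I http://yourdomain.com    # expect HTTP/1.1 301 with a Location: https://... header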

That’s all it takes.  I think it might be a good exercise to try something like this with Chef but for now this process works okay by hand.  Let me know what you think or if this can be improved.

Read More

Introduction to Logstash+ElasticSearch+Kibana

There are a few problems with the current state of logging. The first is that there is no real unified or agreed-upon standard for how to do logging across software platforms, so it is typically left up to the software designer to choose how to design and output logs. Because of this non-standardized approach, logs come in many, many different formats. Obviously this is an issue if you are attempting to gather useful and meaningful data from a variety of different sources. Because of this large number of log types and formats, numerous logging tools have been created, each trying to solve a certain type of logging problem, so selecting one tool that offers everything can quickly become a chore. The other big problem is that logs can produce an overwhelming amount of information. Many of the traditional tools do nothing to correlate and represent the data that they collect. Therefore, narrowing down specific issues can also become very difficult.

Logstash solves both of these problems in its own way. First, it does a great job of abstracting out a lot of the difficulty with log collection and management. Say, for example, you need to collect MySQL logs, Apache logs, and syslogs on a system. Logstash doesn’t discriminate; you just tell Logstash what to expect and it will go ahead and process those logs for you. With ElasticSearch and Kibana, you can then quickly gather useful information by searching through logs and identifying patterns and anomalies in your data.

The goal of this post is to take readers through the process of getting up and running, starting from scratch all the way to a working example. Feel free to skip through any of the various sections if you are looking for something specific. I’d like to mention quickly that this post covers the steps for configuring Logstash 1.4 on an Ubuntu 13.10 system, with a log forwarding client on anything you’d like. You *may* run into issues if you are trying these steps on different versions or Linux distributions.

Installing the pieces:

We will start by installing all of the various pieces that work together to create our basic centralized logging server. The architecture can be a little bit confusing at first, so here is a diagram from the Logstash docs to help.

[Diagram: Logstash architecture]

Each of the following components does a specific task:

  • Java – Runtime Environment that Logstash uses to run.
  • Logstash – Collects and processes the logs coming into the system.
  • ElasticSearch – This is what stores, indexes and allows for searching the logs.
  • Redis – This is used as a queue and broker to feed messages and logs to Logstash.
  • Kibana – Web interface for searching and analyzing logs stored by ES.

Java

sudo apt-get install openjdk-7-jre

Logstash

cd ~
curl -O https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
tar zxvf logstash-1.4.2.tar.gz

ElasticSearch (Logstash is picky about which version gets installed)

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.deb
sudo dpkg -i elasticsearch-1.3.2.deb

Redis

sudo apt-get install redis-server

Kibana

cd ~
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz
tar xvfz kibana-3.0.0.tar.gz

Nginx

sudo apt-get install nginx

Configuring the pieces:

This is an important part of a successful setup because there are a lot of different moving parts here. If something isn’t working correctly you will want to double check that you have all of your configs set up correctly.

Redis

This step will bind Redis to your public interface so that other servers can connect to it. Find this line in the /etc/redis/redis.conf file

bind 127.0.0.1

and change it to the following:

bind 0.0.0.0

We need to restart Redis for it to pick up our change:

sudo service redis-server restart
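A quick way to confirm Redis is actually listening on the public interface is to ping it from another box (substitute your logging server’s IP, 192.168.1.200 in the examples throughout this post):

redis-cli -h 192.168.1.200 ping
# should respond with: PONG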

ElasticSearch

Find the lines in the /etc/elasticsearch/elasticsearch.yml file and change them to the following:

cluster.name: elasticsearch
node.name: "logstash test"

and restart elasticsearch:

sudo service elasticsearch restart

You can test your config and ElasticSearch by browsing to the name/IP of the host on port 9200, e.g. http://192.168.1.200:9200
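Or, from the command line:

curl http://192.168.1.200:9200/_cluster/health?pretty
# a "green" or "yellow" status means ElasticSearch is up and reachable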

Logstash server

We need to create a config here for the central Logstash.  Let’s create a file called /etc/logstash/server.conf and input the following:

input {
  redis {
    host => "192.168.1.200"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "elasticsearch"
  }
}

Replace host with the local IP address of your Redis server; in this case it is on the same host as Logstash.

Finally, fire up the Logstash server with the following command:

~/logstash-1.4.2/bin/logstash --verbose -f /etc/logstash/server.conf

Kibana

You will need to navigate to your Kibana files.  From the installation steps above we chose ~/kibana-3.0.0.  So to get everything working we need to edit a file named config.js in the Kibana directory to point it to the correct host.  Change it from:

elasticsearch: "http://"+window.location.hostname+":9200"

to

elasticsearch: "http://192.168.1.200:9200"

Nginx

We are almost done.  We just have to configure nginx to point to our Kibana website.  To do this we need to copy the kibana directory to the default webserver root.

sudo mkdir /var/www
sudo cp -R ~/kibana-3.0.0 /var/www/kibana

Finally, we edit the /etc/nginx/sites-enabled/default file and find the following:

root /usr/share/nginx/www;

and change it to read as follows:

root /var/www/kibana;

Now restart Nginx:

sudo service nginx restart

Now you should be able to open up a browser and navigate to either http://localhost or to your IP address and get a nice web GUI for Kibana.

Logstash client

We’re almost finished. We just need to configure the client to forward some logs over to our central Logstash server. Follow the instructions for downloading Logstash as we did earlier on the centralized logging server. Once you have the files ready to go, you need to create a config for the client server.

Again we will create a config file.  This time it will be /etc/logstash/agent.conf and we will use the following configuration:

input {
  file {
    type => "apache-access"
    path => "/var/log/apache2/other_vhosts_access.log"
  }

  file {
    type => "apache-error"
    path => "/var/log/apache2/error.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  stdout { }
  redis {
    host => "192.168.1.200"
    data_type => "list"
    key => "logstash"
  }
}

Let’s fire up our client with the following command:

~/logstash-1.4.2/bin/logstash --verbose -f /etc/logstash/agent.conf

If you switch back to your browser and wait a few minutes you should start seeing some logs being displayed. If logs start showing up on the event chart, you have it working!
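If nothing shows up, a couple of command line checks on the logging server can help narrow down where things are stuck (these just poke at the Redis key and the ElasticSearch index used in the configs above):

# events waiting in the Redis queue (should drain as the Logstash server consumes them)
redis-cli -h 192.168.1.200 llen logstash

# ask ElasticSearch for a recent apache-access event
curl 'http://192.168.1.200:9200/_search?q=type:apache-access&size=1&pretty'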

[Image: Kibana interface]

Conclusion

As I have learned with everything, there are always caveats. For example, I was getting some strange errors on my client endpoint whenever I ran the Logstash agent to forward logs to the central Logstash server. It turned out that Java didn’t have enough RAM and CPU available to begin with. Be aware that you may run into seemingly random problems if you don’t allocate enough resources to the machine.

Another quick tip for when you are encountering issues or things just aren’t working correctly is to turn on verbose output (which we have done in our example), which will give you some clues to help identify the specific problem you are having.

Resources

http://www.slashroot.in/logstash-tutorial-linux-central-logging-server
http://logstash.net/docs/1.4.0/

Read More

Setting up a GitHub webhook in Jenkins

This post will detail the steps to have Jenkins automatically create a build when it detects changes to a GitHub repository. This can be a very useful improvement to your continuous integration setup with Jenkins because Jenkins only attempts a new build when a change is detected, rather than polling on an interval, which can be a little bit inefficient.

There are a few steps necessary to get this process working correctly that I would like to highlight in case I have to do this again or if anybody else would like to set this up.  Most of the guides that I found were very out of date so their instructions were a little bit unclear and misleading.

The first step is to configure Jenkins to talk to GitHub.  You will need to download and install the GitHub plugin (I am using version 1.8 as of this writing).  Manage Jenkins -> Manage Plugins -> Available -> GitHub plugin

[Image: GitHub plugin]

After this is installed you can either create a new build or configure an existing build job.  Since I already have one set up I will just modify it to use the GitHub hook.  There are a few things that need to be changed.

First, you will need to add your GitHub repo:

[Image: Source code management settings]

Then you will have to tick the box indicated below – “Build when a change is pushed to GitHub”

[Image: Build when a change is pushed to GitHub]

Also note that Jenkins should have an SSH key already associated with the desired GitHub project.

You’re pretty close to being done.  The final step is to head over to GitHub and adjust the settings for the project by creating a webhook for your Jenkins server.  Select the repo you’re interested in and click Settings.  If you aren’t an admin of the repo you will not be able to modify the settings, so talk to an owner to either finish this step for you or have them grant you admin to make the change yourself.

The GitHub steps are pretty straight forward.  Open the “Webhooks & Services” tab -> choose “Configure Services” -> find the Jenkins (GitHub plugin option) and fill it in with a similar URL to the following:

  • http://<Name of Jenkins server>:8080/github-webhook/

[Image: Jenkins webhook configuration]

Make sure to tick the active box and ensure it works by running the “Test Hook”.  If it comes back with a payload deployed message you should be good to go.

UPDATE

I found a setting that was causing us issues. There is a check box near the bottom of the authentication section labeled “Prevent Cross Site Request Forgery exploits” that needs to be unchecked in order for this particular method to work.

[Image: Disable the forgery protection option]

Let me know if you have any issues; I haven’t found a good way to debug or test outside of the message returned from the GitHub configuration page. I did find an alternative method that may work, but I didn’t need to use it, so I can update this post if necessary.

If you want more details about web hooks you can check out these resources:

Read More