After the recent Heartbleed incident (which I’m sure many of you remember well) I had to reissue some certificates. Since I do not create and assign certs very frequently, it seemed like a good opportunity to take some notes as a blog post and hopefully ease the process for others. After patching the vulnerable version of OpenSSL, there are really only a few steps needed to accomplish this. Assuming you already have nginx installed, which is trivial to do on Ubuntu, the first step is to create the necessary crt and key files.
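If you just need something to test with, a self-signed pair can be generated straight into /etc/nginx with OpenSSL as a quick sketch (the cert.key and cert.crt file names are placeholders I’ve chosen; for a CA-signed cert you would generate a key and CSR instead):
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt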
Next you will need to tell nginx to load your new certs in its config. The ssl_certificate and ssl_certificate_key directives need to point at the cert files we created above, which we placed in the /etc/nginx directory. If you decide to put the certs somewhere else, you will need to update the config to reflect that location. Here is an example of what the server block in your /etc/nginx/sites-available config might look like.
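This is only a minimal sketch, with example.com, the site root, and the index files as placeholders for your own site:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}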
Just to cover all our bases, we will also redirect any requests that come in on port 80 (the default web port) over to 443 for SSL. This is a simple addition and ensures that all traffic ends up on the encrypted side.
server {
    listen 80;
    return 301 https://$host$request_uri;
}
The final step is to reload your configuration and test to make sure everything works.
sudo service nginx reload
If nginx fails to reload, more than likely there is a configuration or syntax error somewhere in your config file, so comb through it for any typos or mistakes. Once your config is loaded properly you can check your handiwork by hitting your site over http://. If everything is working, it should automatically redirect you to https://.
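If it doesn’t, nginx’s built-in syntax check is a quick way to narrow things down, since it usually points at the offending line:
sudo nginx -t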
That’s all it takes. I think it might be a good exercise to try something like this with Chef but for now this process works okay by hand. Let me know what you think or if this can be improved.
There are a few problems with the current state of logging. The first is that there is no unified or agreed-upon standard for how to do logging across software platforms, so it is typically left up to the software designer to decide how to structure and output logs. Because of this non-standardized approach, logs come in many different formats, which is obviously an issue if you are trying to gather useful, meaningful data from a variety of sources. Because of this large number of log types and formats, numerous logging tools have been created, each trying to solve a certain type of logging problem, so selecting one tool that offers everything can quickly become a chore. The other big problem is that logs can produce an overwhelming amount of information, and many of the traditional tools do nothing to correlate and represent the data they collect, so narrowing down specific issues can become very difficult.
Logstash solves both of these problems in its own way. First, it does a great job of abstracting away a lot of the difficulty of log collection and management. Say, for example, you need to collect MySQL logs, Apache logs, and syslogs on a system. Logstash doesn’t discriminate: you just tell it what to expect and it will go ahead and process those logs for you. Combined with ElasticSearch and Kibana, you can quickly gather useful information by searching through the logs and identifying patterns and anomalies in your data.
The goal of this post is to take readers through the process of getting up and running, starting from scratch and ending with a working example. Feel free to skip through any of the various sections if you are looking for something specific. I’d like to mention quickly that this post covers the steps to configure Logstash 1.4.2 on an Ubuntu 13.10 system, with a log forwarding client on whatever system you’d like. You *may* run into issues if you try these steps on different versions or Linux distributions.
Installing the pieces:
We will start by installing all of the various pieces that work together to create our basic centralized logging server. The architecture can be a little confusing at first, so here is a diagram from the Logstash docs to help.
Each of the following components does a specific task:
Java – The runtime environment that Logstash needs to run.
Logstash – Collects and processes the logs coming into the system.
ElasticSearch – Stores and indexes the logs and makes them searchable.
Redis – Used as a queue and broker to feed messages and logs to Logstash.
Kibana – Web interface for searching and analyzing the logs stored in ES.
Java
sudo apt-get install openjdk-7-jre
Logstash
cd ~
curl -O https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
tar zxvf logstash-1.4.2.tar.gz
ElasticSearch (Logstash is picky about which version gets installed)
This is an important component in a successful setup because there are a lot of different moving parts here. If something isn’t working correctly, you will want to double check that you have all of your configs set up correctly.
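As a sketch, assuming ElasticSearch 1.1.1 (the release the Logstash 1.4.x documentation pairs with) and the same download site used for the Logstash tarball above:
cd ~
curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
sudo dpkg -i elasticsearch-1.1.1.deb
sudo service elasticsearch start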
Redis
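Assuming Redis isn’t already on the box, the stock Ubuntu package is all this setup needs:
sudo apt-get install redis-server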
This step will bind Redis to your public interface so that other servers can connect to it. Find the following line in the /etc/redis/redis.conf file
bind 127.0.0.1
and change it to the following:
bind 0.0.0.0
We need to restart Redis for it to pick up our change:
sudo service redis-server restart
ElasticSearch
Next, a couple of settings in the /etc/elasticsearch/elasticsearch.yml file need to be adjusted for this environment.
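A minimal sketch of what that might look like for a simple single-node setup (these particular values are assumptions on my part rather than required settings):
# name of the cluster that Logstash will write to
cluster.name: elasticsearch
# listen on all interfaces so Kibana (running in the browser) and Logstash can reach port 9200
network.host: 0.0.0.0
Then restart ElasticSearch so the change takes effect:
sudo service elasticsearch restart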
You will need to navigate to your Kibana files. From the installation steps above we chose ~/kibana-3.0.0, so to get everything working we need to edit the config.js file in the Kibana directory so that it points at the correct ElasticSearch host.
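In a stock Kibana 3.0.0 config.js the relevant setting is the elasticsearch line, which defaults to the page’s own hostname:
elasticsearch: "http://"+window.location.hostname+":9200",
Assuming the browser can reach ElasticSearch on port 9200 directly, point it explicitly at the logging server instead (the address below is a placeholder):
elasticsearch: "http://<your logging server IP>:9200",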
We are almost done. We just have to configure nginx to serve up our Kibana site. To do this we need to copy the Kibana directory to a location nginx can serve, and then point nginx at it.
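Something like the following should do it, assuming Kibana was unpacked to ~/kibana-3.0.0 as above and that /var/www/kibana is where we want to serve it from:
sudo mkdir -p /var/www
sudo cp -R ~/kibana-3.0.0 /var/www/kibana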
Finally, we edit the /etc/nginx/sites-enabled/default file and find the following:
root /usr/share/nginx/www;
and change it to read as follows:
root /var/www/kibana;
Now restart nginx:
sudo service nginx restart
Now you should be able to open up a browser and navigate to either http://localhost or to your IP address and get a nice web GUI for Kibana.
Logstash client
We’re almost finished. We just need to configure the client to forward some logs over to the central Logstash server. Follow the instructions for downloading Logstash as we did earlier on the centralized logging server. Once you have the files ready to go, you need to create a config for the client.
Again we will create a config file. This time it will be /etc/logstash/agent.conf and we will use the following configuration:
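A minimal sketch of what that file might contain (the log paths and the Redis host are placeholders, and the stdout output is just there for extra visibility while testing):
input {
  file {
    # whichever local logs you want to ship
    path => [ "/var/log/syslog", "/var/log/auth.log" ]
    type => "syslog"
  }
}
output {
  # print events locally so you can see them flowing
  stdout { codec => rubydebug }
  redis {
    host => "<central logstash server IP>"
    data_type => "list"
    # must match the key the central server's redis input reads from
    key => "logstash"
  }
}
The agent can then be started from wherever the Logstash tarball was unpacked, for example:
sudo ~/logstash-1.4.2/bin/logstash agent -f /etc/logstash/agent.conf --verbose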
Switch back to your browser and wait a few minutes; you should start seeing some logs being displayed. If logs are coming in and showing up on the event chart, you have it working!
Conclusion
As with everything, I have learned that there are always caveats. For example, I was getting some strange errors on my client endpoint whenever I ran the Logstash agent to forward logs to the central Logstash server. It turned out that the machine didn’t have enough RAM and CPU for Java to begin with. Be aware that you may run into seemingly random problems if you don’t allocate enough resources to the machine.
Another quick tip for when you encounter issues or things just aren’t working correctly: turn on verbosity (which we have done in our example). It will give you some clues to help identify the more specific problems you are having.
This post will detail the steps needed for Jenkins to automatically create a build when it detects changes to a GitHub repository. This can be a very useful improvement to your continuous integration setup with Jenkins, because with this method Jenkins attempts a new build only when a change is detected, rather than polling on an interval, which can be a bit inefficient.
There are a few steps necessary to get this process working correctly that I would like to highlight in case I have to do this again or if anybody else would like to set this up. Most of the guides that I found were very out of date so their instructions were a little bit unclear and misleading.
The first step is to configure Jenkins to talk to GitHub. You will need to download and install the GitHub plugin (I am using version 1.8 as of this writing). Manage Jenkins -> Manage Plugins -> Available -> GitHub plugin
After this is installed you can either create a new build or configure an existing build job. Since I already have one set up I will just modify it to use the GitHub hook. There are a few things that need to be changed.
First, you will need to add your github repo:
Then you will then have to tick the box indicated below – “Build when a change is pushed to GitHub”
Also note that Jenkins should have an SSH key already associated with the desired GitHub project.
You’re pretty close to being done. The final step is to head over to GitHub and adjust the settings for the project by creating a webhook for your Jenkins server. Select the repo you’re interested in and click Settings. If you aren’t an admin of the repo you will not be able to modify the settings, so talk to an owner and either have them finish this step for you or have them grant you admin rights to make the change yourself.
The GitHub steps are pretty straightforward. Open the “Webhooks & Services” tab -> choose “Configure Services” -> find the Jenkins (GitHub plugin) option and fill it in with a URL similar to the following:
http://<Name of Jenkins server>:8080/github-webhook/
Make sure to tick the active box and ensure it works by running the “Test Hook”. If it comes back with a payload deployed message you should be good to go.
UPDATE
I found a setting that was causing us problems. There is a check box near the bottom of the authentication section labeled “Prevent Cross Site Request Forgery exploits” that needs to be unchecked in order for this particular method to work.
Let me know if you have any issues; I haven’t found a good way to debug or test this outside of the message returned from the GitHub configuration page. I did find an alternative method that may work but didn’t need to use it, so I can update this post if necessary.
If you want more details about webhooks you can check out these resources:
Since landing myself in a new and unexplored terrain as a freshly minted DevOps admin, I have been thinking a lot about what exactly DevOps is and how I will translate my skills moving into the position. I am very excited to have the opportunity to work in such a new and powerful area of IT (and at such a sweet company!) but really think I need to lay out some of the groundwork behind what DevOps is, to help strengthen my own understanding and hopefully to help others grasp some of the concepts and ideas behind it.
I have been hearing more and more about the DevOps philosophy and its growing influence and adoption in the world of IT, especially at fast-paced cloud and startup companies. From what I have seen so far, I think people really need to start looking at the impact that DevOps is making in the realm of system administration and how to set themselves up to succeed in this profession moving forward.
Here is the official DevOps description on Wikipedia:
DevOps is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.
While this is a solid description, there still seems to be a large amount of confusion about what exactly DevOps is, so I’d like to address some of the key ideas and views that go along with its mentality and its application to system administration. To me, DevOps can be thought of as a combination of the best practices that a career in operations has to offer with many of the concepts and ideas used in the world of development, especially those derived from Agile and Scrum.
The great thing about DevOps is that since it is so new, there is really no universally accepted definition of what it is limited to. This means that those who are currently involved in DevOps development and adoption are essentially creating a new discipline, adding to it as they go. A current DevOps admin can be described in simple terms as a systems admin who works closely with developers to decrease the gap between operations and development. But that is not the main strength that DevOps offers, and it really just hits the tip of the iceberg of what DevOps actually is and means.
For one, DevOps represents a cultural shift in the IT environment. Traditionally in IT landscapes, there has been somewhat of a divide between operations and development. You can think of this divide as a wall built between the dev and ops teams, either due to the siloing of job skills and responsibilities or due to how the organization operates at a broader level. Because of this division of duties, there is typically little to no overlap between the tool sets or thought processes of the dev and ops teams, which can cause serious headaches when trying to get products out the door.
So how do you fix this?
In practical application, the principles of DevOps can be put into practice using things like continuous integration tools, configuration management, logging and monitoring, standardized test, dev, and QA environments, etc. The DevOps mindset and culture has many of its roots in environments of rapid growth and change. One example of this philosophy put into practice is at startup companies that rely on getting their product to market as quickly and smoothly as possible. The good news is that larger enterprise IT environments are beginning to look at some of the benefits of this approach and starting to tear down the walls of the silos.
Some of the benefits of DevOps include:
Increased stability in your environment (embracing config management and version control)
Faster resolution of problems (decreased mean time to resolution)
Continuous software delivery (increasing release frequency brings ideas to market faster)
Much faster software development life cycles
Quicker interaction and feedback loops for key business stakeholders
Automate otherwise cumbersome and tedious tasks to free up time for devs and ops teams
These are some powerful concepts, and the benefits here cannot be overstated, because at the end of the day the company you work for is in the business of making money. The faster it can make changes to become more marketable and competitive, the better.
One final topic I’d like to cover is programming. If you are even remotely interested in DevOps, you should learn to program if you don’t know how already. This is the general direction of the discipline, and if you don’t have a solid foundation to work from you will not be putting yourself in the best position to progress your career. This doesn’t mean you have to be a developer, but IMO you have to at least know and understand what the developers are talking about. It is also very useful to know programming for all of the various scripting and automation tasks that are involved in DevOps. Not only will you be able to debug issues with other software, scripts, and programs, but you will be a much more valuable asset to your team if you can be trusted to get things done and help get product shipped out the door.
This is a HOWTO build your own instance-store backed AMI image which is suitable for creating a Paid AMI. The motivation for doing this HOWTO is simple: I tried it, and it has a lot of little gotchas, so I want some notes for myself. This HOWTO assumes you’re familiar with launching EC2 instances, logging into them, and doing basic command line tasks.
Choosing a starting AMI
There’s a whole ton of AMIs available for use with EC2, but not quite so many that are backed by instance-store storage. Why’s that? Well, EBS is a lot more flexible and scalable, and the instance-store images have a fairly limited size for their root partition. For my use case this isn’t particularly important, and for many use cases it’s trivial to mount some EBS volumes for persistent storage.
Amazon provides some of their Amazon Linux AMIs backed by either EBS or instance-store, but they’re based on CentOS, and frankly I’ve had so much trouble with CentOS in the past that I just prefer my old standby: Ubuntu. Unfortunately, I had a lot of trouble finding a vanilla Ubuntu 12.04 LTS instance-store backed image through the AWS Console. They do exist, however, and they’re provided by Canonical. Thanks guys!
Conveniently, there’s a Launch button right there for each AMI instance. Couldn’t be easier!
Installing the EC2 Tools
Once you’ve got an instance launched and you’re logged in and sudo’d to root, you’ll need to install the EC2 API and AMI tools provided by Amazon. The first step is, of course, to download them. Beware! The tools available through the Ubuntu multiverse repositories are unfortunately out of date.
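As a sketch, the tool zips Amazon publishes can be pulled down and unpacked somewhere like /usr/local/ec2 (the location, package names, and URLs below reflect what was current at the time and are assumptions on my part):
cd /usr/local
mkdir -p ec2 && cd ec2
apt-get install -y unzip openjdk-7-jre-headless ruby
wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip ec2-api-tools.zip
unzip ec2-ami-tools.zip
(The API tools need Java and the AMI tools need Ruby, hence the extra packages.)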
Lastly we have to set the EC2_HOME and the JAVA_HOME environment variables for the EC2 tools to work properly. I like to do this by editing /etc/bash.bashrc so anyone on the machine can use the tools without issue.
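As a sketch, the lines added to /etc/bash.bashrc might look like this (the exact directory names depend on the tool versions you unpacked and on which JVM is installed; these paths are assumptions matching the layout above):
# point the EC2 tools at their install directory and at Java
export EC2_HOME=/usr/local/ec2/ec2-api-tools-1.6.7.4
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$PATH:$EC2_HOME/bin
# add the ec2-ami-tools bin directory to PATH as well, matching whatever version was unzipped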
Once we log out and back in, those variables will be set, and the EC2 tools will be working.
# exit
$ sudo su
# ec2-version
1.6.7.4 2013-02-01
Customizing Your AMI
At this point, your machine should be all set for you to do whatever customization you need to do. Install libraries, configure boot scripts, create users, get your applications set up, anything at all. Once you’ve got a nice, stable (rebootable) machine going, then you can image it.
Bundling, Uploading and Registering your AMI
This is actually pretty easy, but I’ll still go through it. The Amazon documentation is fairly clear, and I recommend following along with that as well, as it explains all the options to each command.
Bundle your instance image. The actual image bundle and manifest will end up in /tmp.
cd /tmp/cert
ec2-bundle-vol -k <private_keyfile> -c <certificate_file> \
-u <user_id> -e <cert_location>
cd /tmp
Upload your bundled image. Note that <your-s3-bucket> should include a path that is unique to this image, such as my-bucket/ami/ubuntu/my-ami-1. Otherwise things will get very messy for you, because an image consists of an image.manifest.xml file plus many generically named chunks that compose the image itself.
ec2-upload-bundle -b <your-s3-bucket> -m <manifest_path> \
-a <access_key> -s <secret_key>