I recently attended DevOps Days Portland, where Kelsey Hightower gave a nice keynote about NoOps. I had heard the term NoOps in passing before the conference but never really thought much about it or its implications. Kelsey's talk got me thinking more and more about the idea and what it means for the DevOps world.
For those of you who aren’t familiar, NoOps is a newer tech buzzword that has emerged to describe the concept that an IT environment can become so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage software in-house.
Obviously the term NoOps has caused some friction between the development world and the operations/DevOps world, both because of its perceived meaning and because of the very controversial article entitled "I Don't Want DevOps. I Want NoOps." that kicked the whole movement off and sparked the original debate back in 2011. The main argument from people who work in operations is that there will always be servers running somewhere; as a developer, you can't just magically make servers go away, which I agree with 100%. It is incredibly short-sighted to assume that any environment can work in a way where operations, in some form, need not exist.
Interestingly though, if you dig into the goals and underlying meaning of NoOps, they are actually fairly reasonable to me when boiled down. Here are just a few of them, borrowed from the article and Kelsey’s talk:
Improve the process of deploying apps
Not just VMs, release management as well
Developers don’t want to deal with operations
Developers don’t care about hardware
All of these goals seem reasonable to me as an operations person (especially the part about not having to work with developers). Therefore, when I look at NoOps I don't take the ACTUAL underlying meaning to be working against operations and DevOps; I look at it as developers trying to find a better way to get their jobs done, however misguided the wording and mindset. I also see NoOps, from an operations perspective, as a shift in the mindset of how to accomplish goals and improve processes and pipelines, which is something very familiar to people who have worked in DevOps.
Because of this perspective, I see an evolution in the way that operations and DevOps works that takes the best ideas from NoOps and applies them in practical ways. Ultimately, operations people want to be just as productive as developers and NoOps seems like a good set of ideas to get on the same page.
To incorporate ideas from NoOps as cloud and distributed technologies continue to advance, operations folks need to embrace programming and automation in areas that have traditionally been managed manually as part of the day-to-day, in order to abstract away complicated infrastructure and make it easier for developers to accomplish their goals. Examples include automatically provisioning networks and VLANs, or issuing and deploying certificates at the click of a button. As more of the infrastructure gets abstracted away, it is important for operations to be able to automate these tasks.
If anything, I think NoOps makes sense as a concept for improving the lives of both developers and operations, which is one facet that DevOps aims to help solve. So to me, the goals of NoOps are a good thing, even though there has been a lot of stigma around it. Just to reiterate, I think it is absurd for anybody to say that operations jobs will be going away anytime soon; the job and its responsibilities are simply evolving to fit the direction other areas of the business are moving. If anything, the skills of managing cloud infrastructure, automating, and building robust systems will be in even higher demand.
As an operations/DevOps person just remember to stay curious and always keep working on improving your skill set.
Docker monitoring, and container monitoring in general, is an area that has historically been difficult. There has been a lot of movement and progress in the last year or so to beef up container monitoring tools, but in my experience the tools have either been expensive or difficult to configure and complicated to use. The combination of Rancher and Prometheus has finally given me hope. Now it is easy to set up and configure a distributed monitoring solution without paying a high price.
Prometheus has recently added support for Rancher via the Rancher exporter, which is great news. This is by far the easiest method I have discovered thus far for experimenting with Prometheus.
For those that don't know much about Prometheus, it is an up-and-coming project created by engineers at SoundCloud and hosted on GitHub. Prometheus is focused on monitoring, with a particular emphasis on container and Docker monitoring. Prometheus uses a polling-based model for "scraping" metrics out of predefined endpoints. The Prometheus Rancher exporter enables Prometheus to scrape Rancher server specific metrics, which are very useful to have. To build on that, one other point worth mentioning is that Prometheus has a very nice, flexible design built upon different client libraries, in a similar way to Graphite, so adding support and instrumenting code for different platforms is easy. Check out the list of exporters in the Prometheus docs for ideas on how to get started exporting metrics.
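To give a feel for the scraping model, here is a minimal sketch of what a scrape job might look like in prometheus.yml. The job name and the exporter address/port are assumptions for illustration only; the Rancher catalog template wires all of this up for you, so you shouldn't need to write it by hand.
# prometheus.yml (sketch)
scrape_configs:
  - job_name: 'rancher'
    scrape_interval: 15s
    static_configs:
      # assumed address and port of the Rancher exporter container
      - targets: ['rancher-exporter:9173']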
This post won't cover setting up Rancher server or any of the Rancher environment since it is well documented in other places. I won't touch on alerting here either because I honestly haven't had much time to dig into it yet. So with that said, the first step I will focus on in this post is getting Prometheus set up and running. Luckily it is extremely easy to accomplish this using the Rancher catalog and the Prometheus template.
Once Prometheus has been bootstrapped and everything is up, test it out by navigating to the Grafana home dashboard created by the bootstrap process. Since this is a simple demo, my dashboard is located at the IP of the server using port 3000 which is the only port that should need to be publicly exposed if you are interested in sharing the Grafana dashboard.
The default Grafana credentials for this catalog template are admin/admin for the username and password, which is noted in the catalog notes found here. The Prometheus tools ship with some nice preconfigured dashboards, so after you have things set up, it is definitely worth checking out some of them.
If you look around the dashboards you will probably notice that metrics for the Rancher server aren’t available by default. To enable these metrics we need to configure Prometheus to connect to the Rancher API, as noted in the Rancher monitoring guide.
Navigate to http://<SERVER_IP>:8080/v1/settings/graphite.host on your Rancher server, click "edit" in the top right, and then update the value to point to the server address where InfluxDB was deployed.
After this setting has been configured, restart the Rancher server container, wait a few minutes and then check Grafana.
As you can see, metrics are now flowing into the dashboard.
Now that we have the basics configured, we can drill down into individual containers to get a more granular view of what is happening in the environment. This type of granularity is great because it gives a very detailed view of exactly what is going on inside our environment and gives us an easy way to share visuals with other team members. Prometheus offers a web interface for interacting with the query language and visualizing results, which is useful for figuring out what kinds of things to visualize in Grafana.
Navigate to the server that the Prometheus server container is deployed to on port 9090. You should see a screen similar to the following.
There is documentation about how to get started with this tool, so I recommend taking a look and playing around with it yourself. Once you find some useful metrics, visualize them in the graph view, then grab the query used to generate the graph and add a new dashboard to Grafana.
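For example, a query along the following lines shows the per-container CPU usage rate over the last five minutes. This assumes the cAdvisor metrics scraped by the catalog stack are available; the metric name comes from cAdvisor, not from Rancher itself.
rate(container_cpu_usage_seconds_total[5m])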
Prometheus offers a lot of power and flexibility and is a great tool for monitoring. I am still very new to Prometheus but so far it looks very promising and I have to say I’m really impressed with the amount of polish and detail I was able to get in just an afternoon of experimenting. I will be updating this post as I get more exposure to Prometheus and get more metrics and monitoring set up so stay tuned.
This post is mostly a reference for folks that are interested in adding a little bit of extra polish and functionality to the stock version of Vim. The plugin system in Vim is a little bit confusing at first but is really powerful once you get past the initial learning curve. I know this topic has been covered a million times, but a centralized reference for how to set up each plugin is a little bit harder to find.
Below I have highlighted a sample list of my favorite Vim plugins. I suggest that you try as many plugins as you can to figure out what suits your needs and workflow best. The following plugins are the most useful to me, but they certainly won't be the best for everybody, so use this post as a reference for getting started with plugins and try some out to decide which ones are best for your own environment.
This is a package manager of sorts for Vim plugins. Vundle allows you to download, install, search and otherwise manage plugins for Vim in an easy and straightforward way.
To get started with Vundle, put the following configuration at THE VERY TOP of your vimrc.
set nocompatible              " be iMproved, required
filetype off                  " required
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#rc()
"" let Vundle manage Vundle
Bundle 'gmarik/Vundle.vim'
...
Then you need to clone the Vundle project into the path specified in the vimrc above.
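Assuming the default path from the snippet above, the clone looks something like the following (the gmarik repo has since moved to the VundleVim organization, but the old path still redirects as far as I know).
git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim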
Now you can install any other defined plugins from within Vim by running :BundleInstall. This should trigger Vundle to start downloading/updating its list of plugins based on your vimrc.
To install additional plugins, update your vimrc with the plugins you want to install, similar to how Vundle installs itself, as shown below.
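For example, adding the color scheme plugin covered in the next section looks like the following; flazz/vim-colorschemes is the commonly used bundle path, but double check the repo name before relying on it.
"" Color schemes
Bundle 'flazz/vim-colorschemes'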
Customizing the look and feel of Vim is a very personal experience. Luckily there are a lot of options to choose from.
The vim-colorschemes plugin allows you to pick from a huge list of custom color schemes that users have put together and published. As illustrated above, you can simply add the repo to your vimrc to gain access to a large number of color options. Then, to pick one, just add the following to your vimrc (after the Bundle command).
colorscheme xoria256
Next time you open up Vim you should see color output for the scheme you picked.
Syntastic is a fantastic syntax highlighter and linting tool and is easily the best syntax checker I have found for Vim. Syntastic offers support for tons of different languages and styles and even offers support for third party syntax checking plugins.
Here is how to install and configure Syntastic using Vundle. The first step is to add Syntastic to your vimrc.
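Something like the following should do it; scrooloose/syntastic was the original repo path and, as far as I know, it still redirects now that the project lives under the vim-syntastic organization.
"" Syntax checking
Bundle 'scrooloose/syntastic'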
There are a few basic settings that also need to get added to your vimrc to get Syntastic to work well.
" Syntastic statusline
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
" Sytnastic settings
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_loc_list_height=5
let g:syntastic_check_on_wq = 0
" Better symbols
let g:syntastic_error_symbol = 'XX'
let g:syntastic_warning_symbol = '!!'
That’s pretty much it. Having a syntax highlighter and automatic code linter has been a wonderful boon for productivity. I have saved myself so much time chasing down syntax errors and other bad code. I definitely recommend this tool.
This plugin is an autocompletion tool that adds tab completion to Vim, giving it a really nice IDE feel. I’ve only tested YCM out for a few weeks now but have to say it doesn’t seem to slow anything down very much at all, which is nice. An added bonus to using YCM with Syntastic is that they work together so if there are problems with the functions entered by YCM, Syntastic will pick them up.
Here are the installation instructions for Vundle. The first thing you will need to do is add a Vundle reference to your vimrc.
"" Autocomplete
Bundle 'Valloric/YouCompleteMe'
Then, in Vim, run :BundleInstall to download the git repo for YouCompleteMe. Once the repo is downloaded you will need a few other tools installed to get things working correctly. Check the official documentation for installation instructions for your system. On OS X you will need Python, CMake, MacVim and clang support.
xcode-select --install
brew install cmake
Then, to install YouCompleteMe.
cd ~/.vim/bundle/YouCompleteMe
git submodule update --init --recursive  # not needed if you use Vundle
./install.py --clang-completer
This plugin highlights pesky whitespace automatically. It is really useful to just have on in the background to help you catch whitespace mistakes. I know I make a lot of mistakes with stray whitespace, so having this is really nice.
These tools are useful for file management and traversal. These plugins become more powerful when you work with a lot of files and move around different directories a lot. There is some debate about whether to use NERDTree or the built-in netrw. Nonetheless, it is still worth checking out different file browsers to see how they work.
Check out Vim Unite for a sort of hybrid file manager that offers fuzzy finding like CtrlP along with additional functionality, like the ability to grep files from within Vim using a mapped key.
This is a shell and Bash linting tool that integrates with Vim and is great. Bash is notoriously difficult to read and debug, and the shellcheck tool helps out with that a lot.
Install shellcheck on your system and Syntastic will automatically pick up the installation and do its linting whenever you save a file. I have been writing a lot of Bash lately and the shellcheck tool has been a godsend for catching mistakes, and it is especially useful in Vim since it runs all the time.
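Installation is usually just a package away. For example, the following should work on a Debian/Ubuntu box or on OS X with Homebrew; double check the package name for other platforms.
# Debian/Ubuntu
sudo apt-get install shellcheck
# OS X with Homebrew
brew install shellcheck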
By combining the power of a good syntax checker with a solid understanding of Bash, you should be that much more productive once you get used to having a built-in syntax and style checker for your scripts.
I have created a GitHub project that has basic instructions for getting started. You can take a look over there to see how all of this works and to get ideas for your own setup.
I used the following links as reference for my approach to Dockerizing Sentry.
If you have existing configurations to use, it is probably a good idea to start from there. You can check my GitHub repo for what a basic configuration looks like. If you are starting from scratch, or are using version 7.1.x or above, you can use the "sentry init" command to generate a skeleton configuration to work from.
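For reference, generating the skeleton is just the following, run wherever the sentry Python package is installed; if I remember correctly it drops a sentry.conf.py under ~/.sentry by default.
sentry init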
For this setup to work you will need the following prebuilt Docker images/containers. I suggest using something simple like docker-compose to stitch the containers together.
NOTE: If you are running this on OS X you may need to do some trickery and grant special permissions at the host (Mac) level, e.g. create the ~/docker/postgres directory and give it the correct permissions (I just used 777 recursively for testing; make sure to lock it down if you put this in production).
I wrote a little script in my Github project that will take care of setting up all of the directories on the host OS that need to be set up for data to persist. The script also generates a self signed cert to use for proxying Sentry through Nginx. Without the certificate, the statistics pages in the Sentry web interface will be broken.
To run the script, run the following command and follow the prompts. Also make sure you have docker-compose installed beforehand to run all of the needed commands.
sudo ./setup.sh
The certs that get generated are self-signed, so you will see the red lock in your browser. I haven't tried it yet, but I imagine using Let's Encrypt to create the certificates would be very easy. Let me know if you have had any success generating Nginx certs for Docker containers; I might write a follow-up post.
Preparing Postgres
After setting up directories and creating certificates, the first thing necessary to get up and running is to add the Sentry superuser to Postgres (at least version 9.4). To do this, you will need to fire up the Postgres container.
docker-compose up -d postgres
Then to connect to the Postgres DB you can use the following command.
docker-compose run postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
Once you are logged in to the Postgres container you will need to set up a few Sentry DB related things.
First, create the role.
CREATE ROLE sentry superuser;
And then allow it to login.
ALTER ROLE sentry WITH LOGIN;
Create the Sentry DB.
CREATE DATABASE sentry;
When you are done in the container, \q will drop you out of the PostgreSQL shell.
After you're done configuring the DB components you will need to "prime" Sentry by running it a first time. This will probably take a little while because it also requires building and pulling all the other needed Docker images.
docker-compose build
docker-compose up
You will quickly notice, if you try to browse to the Sentry URL (e.g. the IP/port of your Sentry container, or the docker-machine IP if you're on OS X), that there are errors in the logs and 503s when you hit the site.
Repair the database (if needed)
If this is the first time you have run through the setup, you will need to run the following command on your DB to repair it.
docker-compose run sentry sentry upgrade
In this setup, the default Postgres database username and password are both "sentry". As part of the setup, the upgrade prompt will ask you to create a new user and password; make note of what those are. You will definitely want to change these defaults if you use this outside of a test or development environment.
After upgrading/preparing the database, you should be able to bring up the stack again.
docker-compose up -d && docker-compose logs
Now you should be able to get to the Sentry URL and start configuring. To manage the username/password you can visit the /admin URL and set up the accounts.
Next steps
The Sentry server should come up and let you in, but it will likely need more configuration. Using the power of docker-compose, it is easy to add in any custom configurations you have. For example, if you need to adjust Sentry-level configuration, all you need to do is edit the file at ./sentry/sentry.conf.py and then restart the stack to pick up the changes. Likewise, if you need to make changes to Nginx or Celery, just edit the configuration file and bump the stack with "docker-compose up -d".
I have attempted to configure as many sane defaults as possible in the base config to make the configuration steps easier. You will probably want to check some of the following settings in the sentry/sentry.conf.py file; a sketch of what they might look like follows the list.
SENTRY_ADMIN_EMAIL – For notifications
SENTRY_URL_PREFIX – This is especially important for getting stats working
SENTRY_ALLOW_ORIGIN – Where to allow communications from
ALLOWED_HOSTS – Which hosts can communicate with Sentry
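Here is a rough sketch of how those values might be filled in. The hostnames and email address are placeholders, not values from the repo, so substitute your own.
# sentry.conf.py (sketch; values are placeholders)
SENTRY_ADMIN_EMAIL = 'admin@example.com'
SENTRY_URL_PREFIX = 'https://sentry.example.com'
SENTRY_ALLOW_ORIGIN = '*'
ALLOWED_HOSTS = ['sentry.example.com']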
If you have the SENTRY_URL_PREFIX set up correctly you should see something similar when you visit the /queue page, which indicates statistics are working.
If you want to set up any kind of email alerting, make sure to check out the mail server settings.
docker-compose.yml example file
The following configuration shows how the Sentry stack should look. The meat of the logic is in this configuration, but since docker-compose is so flexible, you can modify it to use custom commands, different ports or any other configuration you may need to make Sentry work in your own environment.
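The full file lives in the GitHub repo, so treat the following as only a rough sketch of its shape, using the older docker-compose v1 syntax; the service names, ports, volume paths and commands here are assumptions for illustration rather than the exact contents of the repo.
# docker-compose.yml (sketch only)
postgres:
  image: postgres:9.4
  volumes:
    - ~/docker/postgres:/var/lib/postgresql/data
redis:
  image: redis
sentry:
  build: ./sentry
  links:
    - postgres
    - redis
  ports:
    - "9000:9000"
celery:
  build: ./sentry
  command: sentry --config=/home/sentry/.sentry/sentry.conf.py celery worker
  links:
    - postgres
    - redis
celerybeat:
  build: ./sentry
  command: sentry --config=/home/sentry/.sentry/sentry.conf.py celery beat
  links:
    - postgres
    - redis
nginx:
  image: nginx
  ports:
    - "80:80"
    - "443:443"
  links:
    - sentry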
The Dockerfiles for each of these components are fairly straightforward. In fact, the same config can be used for the Sentry, Celery and Celerybeat services.
Sentry
# Kombu breaks in 2.7.11
FROM python:2.7.10
# Set up sentry user
RUN groupadd sentry && useradd --create-home --home-dir /home/sentry -g sentry sentry
WORKDIR /home/sentry
# Sentry dependencies
RUN pip install \
psycopg2 \
mysql-python \
supervisor \
# Threading
gevent \
eventlet \
# Memcached
python-memcached \
# Redis
redis \
hiredis \
nydus
# Sentry
ENV SENTRY_VERSION 7.7.4
RUN pip install sentry==$SENTRY_VERSION
# Set up directories
RUN mkdir -p /home/sentry/.sentry \
&& chown -R sentry:sentry /home/sentry/.sentry \
&& chown -R sentry /var/log
# Configs
COPY sentry.conf.py /home/sentry/.sentry/sentry.conf.py
#USER sentry
EXPOSE 9000/tcp 9001/udp
# Making sentry commands easier to run
RUN ln -s /home/sentry/.sentry /root
CMD sentry --config=/home/sentry/.sentry/sentry.conf.py start
Since the customized Sentry config is rather lengthy, I will point you to the Github repo again. There are a few values that you will need to provide but they should be pretty self explanatory.
Once the configs have all been put into place you should be good to go. A bonus piece would be to add an Upstart service that takes care of managing the stack if the server gets rebooted or the containers get stuck in an unstable state. The configuration is fairly easy to do, and many other guides and posts have been written about how to accomplish this; a rough sketch of the idea follows.
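For what it's worth, an Upstart job along these lines is roughly what I have in mind; the job name, working directory and docker-compose path are all assumptions, so adjust them for your environment.
# /etc/init/sentry.conf (sketch)
description "Sentry docker-compose stack"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
chdir /opt/sentry
exec /usr/local/bin/docker-compose up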
The Let's Encrypt client was recently renamed to "certbot". I have updated the post to use the correct name, but if I missed something, use certbot or let me know.
With the announcement of the public beta of the Let's Encrypt project, it is now nearly trivial to get your site set up with an SSL certificate. One of the best parts about the Let's Encrypt project is that it is totally free, so there is pretty much no reason not to protect your blog with an SSL certificate. The other nice part of Let's Encrypt is that it is very easy to get your certificate issued.
The first step to get started is grabbing the latest source code for the project from GitHub. Log on to your WordPress server (I'm running Ubuntu) and clone the repo. Make sure to install git if you haven't already.
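On Ubuntu that boils down to something like the following; the certbot/certbot GitHub path is the renamed project, and the old letsencrypt/letsencrypt path should still redirect to it.
sudo apt-get install git
git clone https://github.com/certbot/certbot.git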
There is a shell script you can run that pretty much does everything for you, including installing any packages and libraries it needs, as well as configuring paths and other components it needs to work.
cd certbot
./certbot-auto
After the bootstrap is done there will be some CLI options available. Run the command with the -h flag to print out the help.
./certbot-auto -h
Since I am using Apache for my blog I will use the "--apache" option.
./certbot-auto --apache
There will be some prompts you need to go through for setting up the certificates and account creation.
This process is still somewhat error prone, so if you make a typo you can just rerun the "./certbot-auto" command and follow the prompts.
The certificates will be dropped into /etc/letsencrypt/live/<website>. Go double check them if needed.
This process will also generate a new Apache configuration file for you to use. You can check for the file in /etc/apache2/sites-enabled. The important part of this config should look similar to the following:
<VirtualHost *:443>
UseCanonicalName Off
ServerAdmin webmaster@localhost
DocumentRoot /var/www/wordpress
SSLCertificateFile /etc/letsencrypt/live/thepracticalsysadmin.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/thepracticalsysadmin.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateChainFile /etc/letsencrypt/live/thepracticalsysadmin.com/chain.pem
</VirtualHost>
As a side note, you will probably want to redirect non-HTTPS requests to use the encrypted connection. This is easy enough to do; just find your .htaccess file (mine was in /var/www/wordpress/.htaccess) and add the following rules.
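A common set of mod_rewrite rules for this looks something like the following; treat it as a sketch and adjust it for your own setup.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]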
Before we restart Apache with the new configuration let’s run a quick configtest to make sure it all works as expected.
apachectl configtest
If everything looks okay in the configtest then you can reload or restart apache.
service apache2 restart
Now when you visit your site you should see the nice shiny green lock icon in the address bar. It is important to remember that the certificates issued by the Let's Encrypt project are valid for 90 days, so you will need to keep up to date and generate new certificates every so often. The Let's Encrypt folks are working on automating this process, but for now you will need to manually generate new certificates and reload your web server.
That’s it. Your site should now be functioning with SSL.
Updating the certificate automatically
To take this process one step further, we can make a script that can be run via cron (or manually) to update the certificate.
Here’s what the script looks like.
#!/usr/bin/env bash
dir="/etc/letsencrypt/live/example.com"
acme_server="https://acme-v01.api.letsencrypt.org/directory"
domain="example.com"
https="--standalone-supported-challenges tls-sni-01"
# Using webroot method
#/root/letsencrypt/certbot-auto --renew certonly --server $acme_server -a webroot --webroot-path=$dir -d $domain --agree-tos
# Using standalone method
service apache2 stop
# Previously you had to specify options to renew the cert but this has been deprecated
#/root/letsencrypt/certbot-auto --renew certonly --standalone $https -d $domain --agree-tos
# In newer versions you can just use the renew command
/root/letsencrypt/certbot-auto renew --quiet
service apache2 start
Notice that I have the "webroot" method commented out. I run a service (Varnish) on port 80 that proxies traffic but also interferes with LE, so I chose to run the standalone renewal method. It is pretty easy; the main difference is that you need to stop Apache before you run it, since Apache binds to ports 80/443. The downtime is okay in my case.
I chose to put the script into a cron job so that I don't have to worry about logging on to my server to regenerate the certificate. One note: the */45 step in the day-of-month field never actually reaches 45, since the field only goes up to 31, so in practice the entry below fires on the first of each month, which is still comfortably within the 90-day window. Here's what a sample crontab for this job might look like.
0 0 */45 * * /root/renew_cert.sh
This is a straightforward process and will help with your search engine rankings as well.