
DevOps Conferences

I did a post quite a while ago that highlighted some of the cooler system admin and operations oriented conferences that were on my radar at the time.  Since then I have changed jobs and am now in a DevOps oriented position, so I'd like to revisit the subject and update that list to reflect some of the cool conferences in the DevOps space.

I'd like to start off by saying that even if you can't make it to the bigger conferences, local groups and meet ups are an excellent way to get out and meet other professionals who do what you do.  Local groups are also a great way to stay in the loop on what's current and to learn about what others are doing.  If you are interested in eventually becoming a presenter or speaker, local meet ups and groups can be a great way to get started.  There are numerous opportunities and communities (especially in bigger cities), so check here for information or to see if there is a DevOps meet up near you.  If there is nothing nearby, start one!  If you can't find any DevOps groups, look for Linux groups or developer groups and network from there; DevOps is becoming popular in broader circles.

After you get your feet wet with meet ups, the next step is to look at conferences that sound interesting to you.  There are about a million different opportunities to choose from: security conferences, developer conferences, server and network conferences, all the way down the line.  I am sticking strictly with DevOps related conferences because that is currently what I am interested in and know best.

Feel free to comment if I missed any conferences that you think should be on this list.

DevOps Days (Multiple dates)

Perhaps the most DevOps centric conference on this list.  These conferences are a great way to meet fellow DevOps professionals and network with them.  The space and industry are changing constantly, and staying on top of all of the changes is crucial to being successful.  Another nice thing about DevOps Days is that the events are spread out around the country (and world) and throughout the year, so they are very accessible.  WARNING:  DevOps Days are not tied to any one set of DevOps tools but rather to the principles and techniques and how to apply them to different environments.  If you are looking for super in depth technical talks, this one may not be for you.

ChefConf (March)

The main Chef conference.  There are large conferences for each of the main configuration management tools, but I chose to highlight Chef because that's what we use at my job.  There are lots of good talks with a Chef centered theme that are also great because the practices can be applied with other tools.  For example, there are many DevOps themes at ChefConf, including continuous integration and deployment topics, how to scale environments, tying different tools together and general configuration management techniques.  Highly recommended for Chef users; feel free to substitute the other big configuration management tool conferences here if Chef isn't your cup of tea (Salt, Puppet, Ansible).

CoreOS Fest (May)

  • 2015 videos haven’t been posted yet

Admittedly, this is a much smaller and more niche conference, but it is still awesome.  It is the first conference put on by the folks at CoreOS and was designed to help the community keep up with what is going on in the CoreOS and container world.  The venue is pretty small, but the content at this year's conference was very good.  There were some epic announcements and talks, including Tectonic announcements and Kubernetes deep dives, so if container technology is something you're interested in then this conference is definitely worth checking out.

Velocity (May)

This one just popped up on my DevOps conference radar.  I have been hearing good things about this conference for a while now but have not had the opportunity to go.  It always has interesting speakers and topics, and a number of the DevOps thought leaders show up for this event.  One cool thing about this conference is that there are a variety of topics at any one time, so it offers a nice, wide spectrum of information.  For example, there are technical tracks covering different areas of DevOps.

DockerCon (June)

Docker has been growing at a crazy pace, so this seems like the big conference to check out if you are in the container space.  This conference is similar to CoreOS Fest but focuses more heavily on Docker topics (obviously).  I haven't had a chance to go to one of these yet, but containers and Docker have so much momentum that they are very difficult to avoid.  As well, many people believe that container technologies are going to be the path to the future, so it is a good idea to be as close to the action as you can.

Monitorama (June)

I think this is one of the coolest conferences, but that is probably just because I am so obsessed with monitoring and metrics collection.  Monitoring seems to be one of those topics that isn't always fun to deal with, but the talks and technologies at this conference actually make me excited about monitoring.  To most, monitoring is a necessary evil, and a lot of the content from this conference can help make your life easier and better in all aspects of monitoring, from new trends and tools to topics on how to correctly monitor and scale infrastructures.  Talks can be technical, but they are well worth it if monitoring is something that interests you.

AWS Re:Invent (November)

This one is a monster.  This is the big conference that AWS puts on every year to announce new products and technologies as well as provide some incredibly helpful technical talks.  I believe this is one of the pricier and more exclusive conferences, but it offers a lot in the way of content and details.  It offers some of the best, most technical topics of discussion that I have seen and has been invaluable as a learning resource.  All of the videos from the conference are posted on YouTube, so you can get access to this information for free.  Obviously the content is related to AWS, but I have found it to be a great way to learn.

Conclusion

Even if you don't have a lot of time to travel or get out to these conferences, nearly all of them post videos from the event so you can watch them whenever you want.  This is an INCREDIBLE learning tool and resource that is FREE.  The only downside to the videos is that you can't ask any questions, but it is easy to find a presenter's contact info if you are interested and feel like reaching out.

That being said, you tend to get a lot more out of attending a conference in person.  The main benefit of going over watching the videos alone is that you get to meet and talk to others in the space, get a feel for what everybody else is doing, and check out many cool tools that you might otherwise never hear about.  At every conference I attend, I learn about some incredibly useful tech that others are using that I had never heard of, and I always run into interesting people that I would otherwise not have the opportunity to meet.

So if you can, definitely get out to these conferences, meet and talk to people, and get as much out of them as you can.  If you can't make it, check out the videos afterwards for some really great nuggets of information; they are a great way to keep your skills sharp and current.

If you have any more conferences to add to this list I would be happy to update it!  I am always looking for new conferences and DevOps related events.



Running ELK on Docker

I wrote a post a while back about how to get the ElasticSearch + Logstash + Kibana stack set up, and recently I have been very involved with Docker, so I thought it would be appropriate to update that post with the new Docker way of doing things.

Update (5/9/15) – I have created a github repo containing configs for running this.  Reader Sergio has also posted a similar solution on github; you can check it out here if you want to try it out.

I found a surprising lack of posts describing how to run the ELK stack with Docker and docker-compose.  This post is much longer and more detailed than usual, so feel free to jump around to different sections for details on different components.  I am planning a follow up to this post with instructions on how to configure Logstash and the logstash-forwarder client, as well as Kibana, to do interesting things with logs stored in ElasticSearch.

There are a lot of other posts about how to get the stack to work, but they are either already out of date, since the Docker world changes so fast, or don't cover the specific details of how different bits work.  The other thing I have observed is that most of the other guides are not done with docker-compose, which is something that makes life easier.

So the first thing I'll cover is how to build the Docker images.  If you are interested, I can make this stuff available on the Docker hub as images or public Dockerfiles.  However, for this article and in general, I strongly prefer to write my own Dockerfiles, so I will be posting my custom configs and files here rather than pulling other (sometimes official) prebuilt images.

Logstash

The first component we will set up is the Logstash server.  This setup also uses the log-courier input plugin.  Log-courier is a more customizable and flexible client for forwarding logs to Logstash.  I use both logstash-forwarder and log-courier in this configuration to allow for a more flexible setup.

The following is a Dockerfile that will build a Logstash 1.5.0 image.  One thing to note about this approach is that you can swap out the LOGSTASH_VER value and the image will be updated to the correct version automatically, ready to deploy when the image gets rebuilt.

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive
ENV LOGSTASH_VER 1.5.0.rc2
WORKDIR /opt

# Dependencies
RUN apt-get update -qq && \
 apt-get install -y -qq \
 wget \
 python \
 openjdk-7-jre-headless

# Install Logstash
RUN wget --quiet "https://download.elasticsearch.org/logstash/logstash/logstash-$LOGSTASH_VER.tar.gz" -O "/opt/logstash-$LOGSTASH_VER.tar.gz" --no-check-certificate && \
 tar zxf logstash-$LOGSTASH_VER.tar.gz && \
 mv logstash-$LOGSTASH_VER logstash

# Install plugins
RUN /opt/logstash/bin/plugin update logstash-output-zeromq
RUN /opt/logstash/bin/plugin install logstash-input-log-courier

# Config files
ADD server.conf /etc/logstash/server.conf
ADD logstash-forwarder.key /etc/logstash/logstash-forwarder.key
ADD logstash-forwarder.crt /etc/logstash/logstash-forwarder.crt

# lumberjack port
EXPOSE 4545
# log-courier port
EXPOSE 4546

CMD /opt/logstash/bin/logstash -f /etc/logstash/server.conf

This will install Logstash version 1.5.0.rc2 and the logstash-input-log-courier input plugin, add certificates for the forwarding clients and start Logstash with the server configuration.
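
If you want a quick sanity check that the plugin installs actually took, you can list the installed plugins from a throwaway container.  This is just a sketch; it assumes you build and tag the image as logstash.

docker build -t logstash .
docker run --rm logstash /opt/logstash/bin/plugin list | grep courier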

In addition to this Dockerfile you will need to generate some certificates for logstash-forwarder clients and the Logstash server itself to use, as well as the server configuration used by the Logstash server.  Below I have a sample but extremely barebones server.conf configuration file.

input {
  lumberjack {
    port => 4545
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
    codec => plain {
      charset => "ISO-8859-1"
    }
  }

  courier {
    port => 4546
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    cluster => "elasticsearch"
  }
}

I will flesh this config out in the next post on customizing Logstash and doing interesting things with Kibana.  For now, we are defining a courier and a lumberjack input, used to ingest logs into Logstash, as well as one output, which tells Logstash what to do with the logs; in this example it is just stuffing them into ES.

To generate the certificates needed for logstash/logstash-forwarder you can either follow the instructions on the logstash-forwarder github page or use the following command to generate the certs and subsequently inject them into the Docker image.  It should probably go without saying, but make sure the version of openssl used to generate these is up to date and secure.

openssl req -x509 -nodes -sha256 -days 1095 -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

You will need to follow a few prompts to fill out the certificate details; again, you can reference the logstash-forwarder github page if you get stuck or are unsure of how to configure the certificate.

After the certs are generated, make sure that the names of the output files match the names in the above Dockerfile, and that is pretty much it for getting Logstash ready.
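
Before baking the certs into the image, it doesn't hurt to verify that the key and certificate actually pair up.  These are stock openssl commands; the two digests should match.

openssl x509 -noout -modulus -in logstash-forwarder.crt | openssl md5
openssl rsa -noout -modulus -in logstash-forwarder.key | openssl md5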

ElasticSearch

The ElasticSearch image is also pretty straightforward.  This will build the version specified by the ES_PKG_NAME variable, currently 1.4.4, and configure a few things.

# Pull base image.
FROM dockerfile/java:oracle-java7

# Set install version
ENV ES_PKG_NAME elasticsearch-1.4.4

# Install ElasticSearch
RUN \
 cd / && \
 wget https://download.elasticsearch.org/elasticsearch/elasticsearch/$ES_PKG_NAME.tar.gz && \
 tar xvzf $ES_PKG_NAME.tar.gz && \
 rm -f $ES_PKG_NAME.tar.gz && \
 mv /$ES_PKG_NAME /elasticsearch

# Define mountable directories
VOLUME ["/data"]

# Define working directory
WORKDIR /data

# Custom ES config
ADD elasticsearch.yml /elasticsearch/config/elasticsearch.yml

# Define default command
CMD ["/elasticsearch/bin/elasticsearch"]

# Expose ports
EXPOSE 9200
EXPOSE 9300

The main key to getting ES to work is getting the configuration file set up correctly.  In this example we are mounting local storage (/data) from the host OS into the container so that if the container dies, the indexes and other data aren't wiped out.  There are also a few other security configuration settings that get set here to lock things down and to make Kibana 4 happy.

http.cors.allow-origin: "/.*/"
http.cors.enabled: true
cluster.name: elasticsearch
node.name: "logstash.domain.com"
path:
 data: /data/index
 logs: /data/log
 plugins: /data/plugins
 work: /data/work

ES is very straightforward: you just set it up and it runs.
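
Once the container is running, a quick way to check that ES is healthy is to hit the cluster health API from the host, assuming the 9200 port mapping used in the docker-compose file later in this post.

curl "http://localhost:9200/_cluster/health?pretty"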

Kibana

This will build the newest iteration of Kibana, which is 4.0.0 as of this writing.  If you aren't living on the bleeding edge and want to know how to get Kibana 3.x.x working, let me know and I will post the configuration for it.

FROM ubuntu:14.04

# Dependencies
RUN apt-get update -qq
RUN apt-get install -y -qq nginx-full wget vim

# Kibana
RUN mkdir -p /opt/kibana
RUN wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.0-linux-x64.tar.gz -O /tmp/kibana.tar.gz && \
 tar zxf /tmp/kibana.tar.gz && mv kibana-4.0.0-linux-x64/* /opt/kibana/

# Configs
ADD kibana.yml /opt/kibana/config/kibana.yml

EXPOSE 5601

CMD /opt/kibana/bin/kibana

So the Dockerfile is pretty straightforward, but there are a few tidbits to be aware of.  Kibana 4.x.x is significantly different in how it works than 3.x.x, so you will need to make a few adjustments if you are familiar with the old version.

You will need to pick and choose the bits out of the following configuration to suit your needs.  For example, you will need to adjust elasticsearch_url, username and password, and decide whether to turn ssl on or off.  There are obviously more options, but most of them probably don't need to be adjusted for now.  Here is what the sample config looks like.

port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://logstash.domain.com:9200"
elasticsearch_preserve_host: true
kibana_index: ".kibana"
default_app_id: "discover"
request_timeout: 300000
shard_timeout: 0
verify_ssl: false

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

That's pretty much it; most of the difficulty of getting the new version of Kibana working is in the configuration, so if you want to tweak things or if something isn't working, that is the first place to look.
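
As a quick smoke test, you can check that Kibana is answering on its port from the host (assuming the 5601 port mapping used in the docker-compose file below), or just open the same address in a browser.

curl -I http://localhost:5601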

Docker Compose (glue the pieces together)

This is an integral part of the setup.  This is what controls the different containers and glues everything together.  Luckily it is easy to get set up and working.  If you aren't familiar, docker-compose was recently rebranded from the old "fig" tool, which was billed as a Docker orchestration tool for running complex Docker applications easily.

The official docs are pretty good and detailed so you can visit their site if you have any questions about how to install or how to get any of the components working here.

The first step is to download and install docker-compose.  Here I am using an Ubuntu system.

sudo pip install -U docker-compose

There are a few docker-compose commands to be familiar with, which we'll get to next, but first here is the sample docker-compose configuration file to test out your ELK stack.

kibana:

 build: /home/<user>/elk/kibana/4.0.0
 restart: always
 ports:
 - "5601:5601"
 links:
 - "elasticsearch:elasticsearch"

elasticsearch:

 build: /home/<user>/elk/elasticsearch/1.4.4
 restart: always
 ports:
 - "9200:9200"
 - "9300:9300"
 volumes:
 - "/data:/data"

logstash:

 build: /home/<user>/elk/logstash/logstash-1.5.0
 restart: always
 ports:
 - "4545:4545"
 - "4546:4546"

Most of the configuration is straightforward.  Here are the main commands to get everything stitched together and working.

  • docker-compose build (from the directory the docker-compose.yml file is in)
  • docker-compose up (to test the stack)
  • docker-compose kill (bring it down)

After you have ironed out all the build and runtime issues and the stack starts cleanly with no errors, you can bring the stack up in detached mode.

  • docker-compose up -d

Additionally, you can look at the logs if something smells fishy.

  • docker-compose logs
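
You can also check the current state of each container, which is handy when one of the pieces dies quietly.

  • docker-compose ps (show container status)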

Design considerations

One thing readers might wonder about is the scalability of this setup.  It will scale up very easily, but not out.  Still, it should be able to handle up to 100k events/second on the Logstash end, so there will be other bottlenecks before the components (ES and Logstash) fall down.  I haven't pushed my own setup that far yet; I have been able to get to around 30k/sec before Logstash dies, which I'm still investigating.  Even with that amount of activity and Logstash choking, ES and Kibana aren't affected.

So if you use this as a guide for a production setup, I would recommend using a decently sized server, at least 4 CPUs and 8 GB of memory, and adjusting the memory and cpu options for the Logstash component if you plan on throwing a lot of logs at it (>30k/s).  I will revisit with some best practices for making Logstash run more smoothly once I have worked out all the performance issues.
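
For reference, here is a sketch of what bumping the Logstash heap might look like in the compose file, assuming the tarball's LS_HEAP_SIZE convention (which the stock startup script reads); adjust the value to fit your server.

logstash:

 build: /home/<user>/elk/logstash/logstash-1.5.0
 restart: always
 environment:
 - "LS_HEAP_SIZE=4g"
 ports:
 - "4545:4545"
 - "4546:4546"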

I would be interested in knowing how to scale this out if anybody has any recommendations, but this setup should scale up decently for most scenarios.  I have not played around with ES sharding across hosts, but I imagine it wouldn't be super complicated, especially using container volume mounts to store the data and indexes at the host OS level.


CLI hotkey and navigation guide

I have been meaning to write this post for quite a while now but have always managed to forget.  I have been piecing together useful terminal shortcuts, commands and productivity tools since I started using Linux back in the day.  If you spend any amount of time in the terminal you hopefully know about some of these tricks already, but if you're like me, you are always looking for ways to improve the efficiency of your bash workflow and make your life easier.

There are a few things I would quickly like to note.  If you use tmux as your CLI session manager, you may not be able to use some of the mentioned hotkeys by default if you don't have certain settings turned on in your configuration file.

You can take a look at my custom .tmux.conf file if you're interested in screen style bindings plus configuration for hotkeys.  If you simply want to add the option that turns on the correct hotkey bindings for your terminal, add this line to your ~/.tmux.conf file:

set-window-option -g xterm-keys on
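
If tmux is already running, you can reload the config in place instead of restarting your sessions:

tmux source-file ~/.tmux.conf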

Also, if you are a Mac user and don't already know about it, I highly recommend checking out iTerm2.  Coming primarily from a Linux background, the hotkey bindings in Mac OS X are a little different from what I am used to and were initially a challenge to get accustomed to.  The transition took a little while, but iTerm has helped me out immensely, as have a few other tricks learned along the way.  I really haven't dug through all the options in iTerm, but there are a huge number of options and customizations that can be made.

The only thing I have been interested in so far is the navigation which I will highlight below.

Adjust iTerm keybindings – As I mentioned, I am used to Linux keybindings, so a natural fit for my purposes is the option key.  The first step is to disable the custom bindings in the iTerm preferences.  To do this, click iTerm -> Preferences -> Profiles -> Keys, find the bindings for option-left arrow and option-right arrow, and remove them from the default profile.

Next, add the following to your global key bindings, iTerm -> Preferences -> Keys.

Move left one word

  • Keyboard shortcut: ⌥← (Option-left arrow)
  • Action: Send Escape Sequence
  • Escape: b

Move right one word

  • Keyboard shortcut: ⌥→ (Option-right arrow)
  • Action: Send Escape Sequence
  • Escape: f

Finally, it is also worth pointing out that I use zsh as my default shell.  There are some really nice additions that zsh offers over vanilla bash.  I recently ran across this blog post, which has some awesome tips, and I have also written about switching to zsh here.  Anyway, here is the list.  It will grow as I find more tips.

Basic navigation:

  • Ctrl-left/right arrow – Jump between words quickly.
  • Opt-left/right arrow – Custom iTerm binding for jumping between words quickly.
  • Alt-left/right arrow – Linux only.  Jump between words quickly.
  • Esc-b/f – Mac OS.  Similar to arrow keys, move between words quickly.
  • Alt-b – Linux only.  Jump back one word.  Handy with other hotkeys overridden.
  • Ctrl-a – Jump to the beginning of a line (doesn’t work with tmux mappings).
  • Ctrl-e – Jump to the end of a line.
  • End – Similar to Ctrl-e, this will send your cursor to the end of the line.
  • Home – Similar to End, except it jumps to the beginning of the line.

Intermediate navigation:

  • Ctrl-u – Cut the current line (everything before the cursor in bash) into the shell's kill ring.
  • Ctrl-y – Paste the previously cut Ctrl-u text back into the terminal.
  • Ctrl-w – Cut the word to the left of the cursor.
  • Alt-d – Cut the word after the cursor position.

Advanced use:

  • Ctrl-x Ctrl-e – Zsh command.  Edit the current command in your $EDITOR, which for me is vim.
  • Ctrl-r – Everybody hopefully knows this one.  It is basically reverse search through your command history.
  • Ctrl-k – Erase everything after the current cursor position.  Handy for long commands.
  • !<command>:p – Print the most recent command that starts with <command>, without running it.
  • cd … – Zsh shortcut that jumps up two directories.  This can be easily aliased (see the sketch just below this list).
  • !$ – Quickly access the last argument of the last command.
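
For example, the cd … trick can be wired up with a couple of aliases in your ~/.zshrc.  This is a minimal sketch; name them however you like.

# Jump up two or three directories quickly
alias ...='cd ../..'
alias ....='cd ../../..'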

Zsh tab completion

Tab completion with Zsh is awesome; it's like bash completion on steroids.  I will attempt to highlight some of my favorite tab completion tricks with Zsh.

Directory shorthand – Say you need to get to a deeply nested directory.  You can type the first few characters of each path component, enough to uniquely match the directory, instead of typing out the whole path.  For example, cd /u/lo/b will expand out to /usr/local/bin.

Command specific history – This one comes in handy all the time.  If you need to grab a command that you don't use very often, you can use Ctrl-r to match the first part of the command, and from there you can up arrow to locate the exact command you typed.

Spelling and case correction – Bash by default can get annoying if you have a long command typed out but somehow managed to typo part of it.  In zsh this is (sometimes) corrected for you automatically when you <tab> to complete the command.  For example, if you are changing directories into the 'Documents' directory, you can type 'cd ~/doc/' and the correct location will be expanded for you.
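
If the case correction isn't happening for you out of the box, one common way to get case insensitive matching is a zstyle in your ~/.zshrc.  This is a sketch; your completion setup may already handle it.

# Match lowercase input against uppercase candidates during completion
zstyle ':completion:*' matcher-list 'm:{a-z}={A-Z}'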

This list will continue to grow as I find more handy shortcuts, hotkeys or generally useful tips and tricks in my day to day command line work.  I really want to build a similar list for Vim, but my Vim skills are unfortunately lacking, plus there is already some really nice documentation and guidance out there.  If you are interested in writing up a Vim productivity post, I would love to post it.  Likewise, if you have any other nice shortcuts or tips you think are worth mentioning, post them in the comments and I will try to get them added to the list.



Introduction to boot2docker

If you work on a Mac (or Windows) and use Docker, then you have probably heard of boot2docker.  If you haven't, boot2docker is basically a super lightweight Linux VM that is designed to run Docker containers.  Unfortunately there is no support (yet) in the Mac OS X or Windows kernels for running Docker natively, so this lightweight VM is used as an intermediary layer that allows the host operating system to communicate with the Docker daemon running inside the VM.  This solution really is not that limiting once you become comfortable with boot2docker and how to work around some of its current limitations.

Because Docker itself is such a new piece of software, the ecosystem and surrounding environment are still expanding and growing rapidly.  As such, the tooling has not had a great deal of time to mature.  As with pretty much anything new, especially in the software and Open Source world, there are definitely some nuances and things to be aware of when working with boot2docker.

That being said, the boot2docker project bridges a gap and does a great job of bringing Docker to otherwise incompatible platforms, as well as making it easy to use across platforms, which is especially useful for furthering the adoption of Docker among Mac and Windows users.

When getting started with boot2docker, it is important to note that there are a few different things going on under the hood.

Components

The first component is VirtualBox.  If you are familiar with virtual machines, there's pretty much nothing new here.  It is the underpinning that runs the VM and is a common tool for creating and managing VMs.  One important note about VBox: it is currently the key to making volume sharing work with boot2docker, allowing a user to pass local directories and files into containers using its shared folder implementation.  Unfortunately it has been pretty well documented that vboxsf (shared folders) performance is poor compared to other solutions.  This is something the boot2docker team is aware of and working on.  I have a workaround outlined below if anyone happens to hit these performance issues.

The next component is the VM.  This is a super lightweight image based on Tiny Core Linux and the 3.16.4 Linux kernel, with AUFS to support Docker.  Other than that there is pretty much nothing else to it.  The TCL image is about 27MB and boots up in about 5 seconds, so it is very fast to get going with.  There are instructions on the boot2docker site for creating a custom .iso if you are interested in building your own customized TCL image.

The final component is boot2docker-cli, normally referred to as the management tool.  This tool does a lot of the magic to get your host talking to the VM with minimal interaction.  It is basically the glue, or the duct tape, that allows users to pass commands from a local shell into the VM and get Docker to do stuff.

Installation

It is pretty dead simple to get boot2docker set up and configured.  You can download everything in one shot from the links on their site http://boot2docker.io, or you can install manually on OSX with brew and a few other tools.  I highly recommend the packaged installer; it is very straightforward and easy to follow, and there is a good video depiction of the process on the boot2docker site.

If you choose to install everything with brew, you can use the following commands as a reference.  Obviously it is assumed that brew is already installed and set up on your OSX system.  The first step is to install boot2docker.

brew install boot2docker

You might need to install VirtualBox separately when using this method, depending on whether or not you already have a suitable version of VirtualBox to use with boot2docker.

The following commands will assume you are starting from scratch and do not have VBox installed.

brew update
brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox

That is pretty much it for installation.

Usage

The boot2docker CLI is pretty straightforward to use.  There are a bunch of commands to help users interface with the boot2docker VM from the command line.  The most basic usage, initializing and creating a vanilla boot2docker VM, can be done with the following command.

boot2docker init

This will pull down the correct image and get the environment set up.  Once the VM has been created (see the tricks section for a bit of customization) you are ready to bring up the VM.

boot2docker start

This command will simply start up the boot2docker VM and run some behind the scenes tasks to help make using the VM seamless.  Sometimes you will be asked to set ENV variables here; just go ahead and follow the instructions to add them.

There are a few other nice commands that help you interact with the boot2docker VM.  For example if you are having trouble communicating with the VM you can run the ip command to gather network information.

boot2docker ip

If the VM somehow gets shut off or you cannot access it you can check its status.

boot2docker status

Finally there is a nice help command that serves as a good guide for interacting with the VM in various ways.

boot2docker help

The commands listed in this section will cover 90% of day to day interaction and usage of the boot2docker VM.  There is a little bit of advanced cli usage covered below in the tricks section.

Tricks

You can actually modify some of the default behavior of your boot2docker VM by altering some of the underlying boot2docker configuration.  For example, boot2docker will look in $HOME/.boot2docker/profile for any custom settings you may have.  If you want to change any network settings, adjust memory or cpu, or tweak a number of other settings, you simply change the profile to reflect the updates.
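
An easy way to seed that profile, assuming your version of the CLI supports the config command (covered below in the troubleshooting section), is to dump the current running config and then edit the values you care about.

boot2docker config > ~/.boot2docker/profile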

You can also override the defaults when you create your boot2docker VM by passing some arguments in.  If you want to change the default memory or disk size, you would run something like

boot2docker init --memory=4096 --disksize=60000

Notice the --disksize=60000.  Docker likes to take up a lot of disk space for some of its operations, so if you can afford to, I would very highly recommend adjusting the default disk size for the VM to avoid any strange out of disk issues.  Most Macbooks or Windows machines have plenty of extra resources and big disks, so usually there isn't a good reason not to leverage the extra horsepower for your VM.

Troubleshooting

One very useful command for gathering information about your boot2docker environment is boot2docker config.  This command will give you all the basic information about the running config, which can be very valuable when you are trying to troubleshoot different types of errors.

If you are familiar with boot2docker already, you might have noticed that it isn't a perfect solution and there are some weird quirks and nuances.  For example, if you put your host machine to sleep while the boot2docker VM is still running and then attempt to run things in Docker, you may get some quirky results or things just won't work.  This is due to the time skew that occurs when you put the machine to sleep and wake it up again; you can check the github issue for details.  You can quickly check whether the boot2docker VM is out of sync with this command.

date -u; boot2docker ssh date -u

If you notice that the times don't match up, then you know to update the time settings.  The best fix I have found for now is to reset the time settings by wrapping the following commands in a script.

#!/bin/sh
 
# Fix NTP/Time
boot2docker ssh -- sudo killall -9 ntpd
boot2docker ssh -- sudo ntpclient -s -h pool.ntp.org
boot2docker ssh -- sudo ntpd -p pool.ntp.org

For about 95% of the time skew issues, you can simply run sudo ntpclient -s -h pool.ntp.org inside the VM to take care of the issue.

Another interesting boot2docker oddity is that sometimes you will not be able to connect to the Docker daemon or will receive other strange errors.  Usually this indicates that the environment variables that get set by boot2docker have disappeared, for example if you close your terminal window or something similar.  Either of the following errors indicates the issue.

dial unix /var/run/docker.sock: no such file or directory

or

Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

The solution is to either add the ENV variables back into the terminal session by hand or, just as easily, modify your bashrc config file to read the values in when the terminal loads up.  Here are the variables that need to be reset or appended to your bashrc.

export DOCKER_CERT_PATH=/Users/jmreicha/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376

This assumes your boot2docker VM has an address of 192.168.59.103 and uses port 2376 for communication.
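
Alternatively, newer versions of the boot2docker CLI can print the right exports for you, which you can eval on the spot or add to your bashrc; this assumes your version includes the shellinit command.

eval "$(boot2docker shellinit)"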

Shared folders

This has been my biggest gripe so far with boot2docker, as I'm sure it has been for others as well.  Mostly I am upset that vboxsf is horrible; in all fairness, the boot2docker people have been awesome to get things working with vboxsf as of release 1.3.  Another caveat to note, if you aren't aware, is that currently, if you pass volumes to docker with "-v", the directory you share must be located within the "/Users" directory on OSX.  Obviously not a huge issue, but something to be aware of if you happen to have problems with volume sharing.

The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem read heavy (grunt), performance becomes a factor.  I have been exploring different workarounds because of this limitation but have not found many that I could convince our developers to use.  Others have had luck by creating a container that runs SMB, or have been able to share a host directory into the boot2docker VM with sshfs, but I have not had great success with these options.  If anybody has these working, please let me know; I'd love to see how to get them working.

The best solution I have come up with so far is using Vagrant with a customized version of boot2docker with NFS support enabled, which requires very little "hacking" to get working, which is nice.  A good enough selling point for me is the speed increase from using NFS instead of vboxsf; it's pretty staggering actually.

This is the project that I have been using: https://vagrantcloud.com/yungsang/boxes/boot2docker.  Thanks to @yungsang for putting it together.  Basically it uses a custom vagrant box based off of the boot2docker iso to accomplish its folder sharing, with the awesome customization that Vagrant provides.

To get this workaround working, grab the Vagrantfile from the link provided above and put it in the location you would like to run Vagrant from.  The magic sauce for the volume sharing is in this line.

config.vm.synced_folder ".", "/vagrant", type: "nfs"

This tells Vagrant to share your current directory into the boot2docker VM at /vagrant, using NFS.  I would suggest modifying the default CPU and memory as well, if your machine is beefy enough.

v.cpus = 4
v.memory = 4096

After you make your adjustments, you just need to spin up the yungsang version of boot2docker and jump into the VM.

vagrant up
vagrant ssh

From within the VM you can run your docker commands just like you normally would.  Ports get forwarded through to your local machine like magic, and everybody is happy.
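
For a quick smoke test from inside the VM, something minimal like the busybox image works well:

docker run --rm busybox echo "hello from boot2docker"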



DevOpsDays Chicago

As a first timer to this event, and to any devopsdays event, I'd like to write up a quick summary and a few of the key takeaways that I got from it.  For anybody that isn't familiar, devopsdays events are 2 day events spread throughout the year at different locations around the world.  You can find more information on their site here:

http://devopsdays.org/

The nice thing about the devopsdays events is that they are small enough to be very affordable ($100) if you register early.  One unique thing about the events is their format, which I really liked.  Basically, the first half of the day is split up into talks given by various leaders in the industry, followed by "Ignite" talks, which are very brief but informative talks on a certain subject, followed by "open spaces", which are pretty much open group discussions about topics suggested by event participants.  I spoke to a number of individuals who enjoyed the open spaces even though they didn't like all of the subjects covered in the talks.  So I thought there was a very nice balance to the format of the conference and how everything was laid out.

I noticed that a number of the talks focused on broad cultural topics as well as a few technical subjects.  Even if you don't like the format or topics, you will more than likely learn a few things from speakers or participants that will help you move forward in your career.  Obviously you will get more out of a conference if you are more involved, so go out of your way to introduce yourself and talk to as many people as you can.  The hallway tracks are really good for introducing yourself and meeting people.

So much to learn

One thing that really stood out to me was how varied the composition and backgrounds of the attendees were.  I met people from gigantic organizations all the way down to small startups and basically everything in between.  The balance and mixture of attendees was really cool to see, and it was great to get different perspectives on different topics.

Another thing that stood out to me was that the topics covered were really well balanced, although some may disagree.  Heading into the conference, I thought most of the talks were going to be super technical in nature, but it turned out that a lot of the talks revolved more around the concepts and ideas that drive the DevOps movement rather than just the tools associated with DevOps.

One massive takeaway I got from the conference was that DevOps is really just a buzzword.  The definition I have of DevOps at my organization may be totally different from somebody else's definition at a different company.  What is important is that even though there will be differences in implementation at different scales, a lot of the underlying concepts and ideas will be similar and can be used to drive change and improve processes and efficiency.

DevOps is really not just about a specific tool or set of tools you may use to get something accomplished; it is more complicated than that.  DevOps is about solving a problem or set of problems first and foremost; the tooling for these tasks is secondary.  Before this event I had these two distinguishing traits of DevOps backwards: I thought I could drive change with tools, but now I understand that it is much more important to drive the change of culture first and then to retool your environment once you have the buy in to do so.

Talk to people

One of the more underrated aspects of this conference (and any conference for that matter) is the amount of knowledge you can pick up from the hallway track.  The hallway track is basically just a way to talk to people you may or may not have met yet who are doing interesting work or have solved problems that you are trying to solve.  I ran into a few people working on some interesting challenges *cough* docker *cough* in the hallway track, and I really got a chance for the first time to see what others are doing with Docker, which I thought was really cool.

Open spaces were another nice way to get people to intermingle.  The open spaces allowed people working on similar issues to put their heads together to discuss specific topics that attendees either found interesting or were actively working on.  A lot of good discussion occurred in these open spaces, and a good amount of knowledge was spread around.

Conclusion

DevOps is not one thing.  It is not a set of tools but rather a shift in thinking, and it therefore involves various cultural aspects, which can get very complicated.  I think in the years to come, as DevOps evolves, a lot more of these aspects will become clearer and will hopefully make it easier for people to get involved in embracing the changes that come along with the DevOps mentality.

Current thought leaders in the DevOps space (many of them in attendance at devopsdays Chicago) are doing a great job of moving the discussion forward, and there are some awesome discussions at the devopsdays events.  Podcasts like Arrested DevOps, The Ship Show and DevOps Cafe are creating a lot of good discussion around the subject as well.  Judging from my observations at the event, it seems there is still a lot of work to be done before DevOps becomes more common and mainstream.
