Docker developer tools

Useful tools for Docker development

Docker is still a young project, and the ecosystem around it hasn’t matured to the point where most people feel comfortable relying on it yet.  It is nice to have such a fast growing set of tools, however the downside to all of this is that many of the tools are not production ready.  I think as the ecosystem solidifies and Docker adoption grows we will see a healthy set of solid, production ready tools built on top of the current generation of tools.

Once you are introduced to the concepts and ideas behind Docker you quickly realize the power and potential it holds.  Inevitably though, there comes a “now what?” moment: Docker can clearly do some interesting things, but you get stuck because there are real barriers to simply dropping it into a production environment.

One problem is that you can’t simply “turn on” Docker in your environment; you need tools to help manage images and containers, handle orchestration, support development workflows, and so on.  So there are a number of challenges to overcome before you can do useful and interesting things with Docker, once you get past the introductory novelty of building an image and deploying a few simple containers.

In this post I will attempt to make sense of the current state of the Docker tooling landscape and take some of the guesswork out of which tools to use in which situations, for those who are hesitant to adopt Docker.  The focus will mostly be on the development side of the ecosystem, because that is a nice gateway to working with and getting acquainted with Docker.

Boot2Docker

As you may be aware, Docker does not (yet) run natively on Mac OS X or Windows.  This can definitely be a hindrance to adoption and to building Docker acceptance amongst developers.  Boot2Docker massively simplifies this issue by essentially creating a sandbox to work with Docker: a thin layer between Docker and your Mac (or Windows) machine, in the form of the boot2docker VM.

You can check it out here, but essentially you download a package, install it, and you are ready to start hacking away on Docker on your Mac.  It is definitely a must for Mac OS X as well as Windows users looking to begin their Docker journey, because nearly all of the complexity is removed.

Behind the scenes, Boot2Docker abstracts away and simplifies a number of things, like setting up SSH keys, managing network interfaces, and setting up VM integrations and guest additions.  Boot2Docker also bundles a CLI for managing the VM that runs Docker, so it is easy to manage and configure the VM from the terminal.
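
As a rough sketch of what that workflow looks like on a Mac (the commands come from the boot2docker docs; the exact host IP and port printed by boot2docker up may differ from what is shown here):

```bash
# Download the boot2docker ISO, create the VM, then boot it
boot2docker init
boot2docker up

# Point the docker client at the daemon inside the VM
# (boot2docker up prints the exact DOCKER_HOST value to use)
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375

# The docker CLI on the Mac now talks to the daemon running in the VM
docker run --rm busybox echo "hello from boot2docker"

# SSH into the VM itself if you need to poke around
boot2docker ssh
```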

CoreOS

It would take many blog posts to describe everything that CoreOS and its tooling can do.  The reason I mention it here is that CoreOS has quickly become one of the core building blocks of many Docker environments.  Docker, as it exists today, is not specifically designed for distributed workloads and as such doesn’t provide much of the tooling needed to solve the challenges that accompany distributed systems.  CoreOS bridges this gap very well.

CoreOS is a minimal Linux distribution that aims to help with a number of Docker related tasks and challenges.  It is distributed by design, so it can do some really interesting things with images and containers using etcd, systemd, fleet, confd and other components as the platform continues to evolve.
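
To give a feel for how those pieces fit together, here is a minimal sketch of scheduling a container across a CoreOS cluster with fleet and etcd (the unit name, image and keys are made up for illustration):

```bash
# A systemd unit that runs a container (service name and image are hypothetical)
cat > myapp.service <<'EOF'
[Unit]
Description=My App
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 80:80 myorg/myapp
ExecStop=/usr/bin/docker stop myapp
EOF

# fleet picks a machine in the cluster to run the unit and keeps track of it
fleetctl start myapp.service
fleetctl list-units

# etcd is the shared key/value store the cluster coordinates through
etcdctl set /services/myapp/status "running"
etcdctl get /services/myapp/status
```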

Because of this tooling and philosophy, CoreOS machines can be rebooted on the fly without interrupting services, since clustered processes can shift between machines.  This means that maintenance can occur whenever and wherever it is needed, which makes the resiliency factor very high for CoreOS servers.

Another highlight is its security model, which is push based.  Instead of manually updating servers with security patches, the CoreOS maintainers periodically push updates out to servers, alleviating the need to patch by hand all the time.  This was very nice when the Shellshock vulnerability was released, because within a day or so a patch was automatically pushed to CoreOS servers, automating the otherwise tedious process of updating servers, especially without config management tools.

Fig

Fig is a must have for anybody who works with Docker on a regular basis, i.e. developers.  Fig allows you to define your environment in a simple YAML config file and then bring up an entire development environment with one command, fig up.
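
As a minimal sketch of that workflow (the services and images here are just an example; the Fig docs cover the full set of options):

```bash
# fig.yml describes the containers in the stack and how they link together
cat > fig.yml <<'EOF'
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
EOF

# Build and start the whole environment in one shot (-d runs it in the background)
fig up -d

# Tail the logs or tear the environment back down just as easily
fig logs
fig stop
```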

Fig works very well for a development workflow because you can rapidly prototype and test how Docker images will work together, catching issues that would otherwise crop up when testing isn’t this easy.  For example, if you are working on an application stack, you can simply define how the different containers should work and interact together in the fig file.

The downside to fig is that in its current form it isn’t really equipped to deal with distributed Docker hosts, something that a large number of projects are attempting to solve and simplify.  This shouldn’t be an issue, though, if you are aware of the limitation beforehand and know that there are some workloads fig is not built for.

Panamax

This is a cool project out of CenturyLink Labs that aims to solve problems around Docker app development and orchestration.  Panamax is similar to Fig in that it stitches containers together logically, but it differs in a few regards.  First, Panamax builds on CoreOS to leverage some of its built in tools, such as etcd and fleet.  Another thing to note is that Panamax currently only supports single host deployments.  The creators of Panamax have stated that clustered, multi host support is in the works, but for now you will have to use Panamax on a single host.

Panamax simplifies Docker image and application orchestration behind the scenes and places a nice layer of abstraction on top of the process, so managing the Docker image “stack” becomes even easier, through a slick GUI.  With the GUI you can set environment variables, link containers together, and bind ports and volumes.

Panamax draws a number of its concepts from Fig.  It uses templates as the underlying way to compose containers and applications, similar to Fig’s config files; both use YAML to compose and orchestrate Docker container behavior.  Another cool thing about Panamax is that there is a public template repo for getting different application and container stacks up and running, so the community participation is a really nice aspect of the project.

If hand editing a command line config file isn’t an ideal fit in your environment, this tool is definitely worth a look.  Panamax is a great way to quickly develop and prototype Docker containers and applications.

Flocker

This is a very young but interesting project, notable for the way that it handles volume management.  Right now, one of the biggest challenges to widespread Docker adoption is exactly the problem that Flocker solves: the ability to persist storage across distributed hosts.

From their github page:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.

Basically, Flocker uses some ZFS magic behind the scenes to allow volumes to float between servers, providing persistent storage across machines and containers.  That is a huge win for building distributed systems that require persistent data and storage, e.g. databases.
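
If I’m reading their getting started guide correctly, a deployment boils down to two YAML files and one command (the file names below follow their examples; the exact YAML formats are documented in the Flocker repo):

```bash
# application.yml describes the containers and the ZFS-backed volumes they need;
# deployment.yml maps those applications onto specific hosts.
# Moving an application to a different host in deployment.yml and re-running
# flocker-deploy migrates its data volume along with the container.
flocker-deploy deployment.yml application.yml
```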

Definitely keep an eye on this project for improvements, and look for its creators to keep pushing this area forward.  They have said it isn’t quite production ready yet, but it is a great tool to use in a test or staging environment.

Flynn

Flynn touts itself as a Platform as a Service (PaaS) built on Docker, in a very similar vein to Heroku.  Having a Docker based PaaS is a huge win for developers because it simplifies developer workflow.  There are some great benefits to having a PaaS in your environment; the subject could easily expand to be its own topic of conversation.

The approach that Flynn takes (and PaaS in general) is that operations should be a product team.  With Flynn, ops can provide the platform and developers can focus on their own tasks much more easily: developing software, testing, and generally spending their time on development instead of fighting operations.  Flynn does a nice job of decoupling operations tasks from dev tasks, so developers don’t need to rely on operations to do their work and operations doesn’t need to concern itself with development tasks, which otherwise causes friction and efficiency issues.
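
The day to day developer workflow ends up feeling very Heroku like.  Here is a rough sketch of what I understand that to look like (the app name is made up, and the exact CLI may have changed since this was written):

```bash
# Create an app on the Flynn cluster; this also wires up a "flynn" git remote
flynn create myapp

# Deploy by pushing; Flynn builds the app and runs it in containers
git push flynn master

# Scale the web process type out across the cluster
flynn scale web=3
```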

Flynn works by tying together a number of different tools, created specifically to solve the challenges of building a PaaS (scheduling, persistent storage, orchestration, clustering, etc.), into one single entity that performs its workloads via Docker.

Currently its developers state that Flynn is not quite suitable for production use, but it is mature enough to play around with and even deploy apps to.

Deis

Deis is another PaaS for Docker, aiming to solve the same problems and challenges that Flynn does, so there is definitely some overlap in the projects as far as end users are concerned.  There is a nice CLI tool for managing and interacting with Deis, and it offers much of the same functionality that either Heroku or Flynn offer.  Deis can do things like horizontal application scaling, supports many different application frameworks, and is open source.
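
The CLI follows the same git push model as Flynn.  As a rough sketch of the kinds of commands it exposes (the names and values are illustrative, and the CLI may have changed since this was written):

```bash
# Create an application on the Deis platform (run inside a git repo)
deis create myapp

# Deploy with a git push, just like Heroku or Flynn
git push deis master

# Twelve-factor style configuration, scaling and logs from the CLI
deis config:set DATABASE_URL=postgres://db.example.com/myapp
deis scale web=3
deis logs
```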

Deis is similar in concept to Flynn in that it aims to solve PaaS challenges but they are quite different in their implementation and how they actually achieve their goals.

Both Flynn and Deis aim to create platforms to build Docker apps on top of, but they go about it in somewhat different ways.  As the creator of Deis explains, Deis is much more practical in its approach to solving PaaS issues: it takes a number of technologies and tools that have already been created, fits them together, and only builds the pieces that are missing.  Flynn is much more ambitious in its approach, implementing a number of its own tools and solutions, including its own scheduler, registration service, and so on, and relying on only a few tools that already exist.  For example, while Flynn builds all of these different components itself, Deis leverages CoreOS to do many of the tasks it needs to operate and work correctly, while minimally bolting on the tooling that it needs to function.

Conclusion

As the Docker ecosystem continues to evolve, more and more options keep sprouting up.  There are already a number of great tools in the space, but as the community continues to evolve I believe the current tools will keep improving and new, useful tools will be built for Docker specific workloads.  It is really cool to see how the Docker ecosystem is growing and how the tools and technologies are disrupting traditional views in a number of areas of tech, including virtualization, DevOps, deployments and application development, among others.

I anticipate the adoption of Docker will continue growing for the foreseeable future as the core Docker project continues to improve and stabilize, along with the tools built around it that I have discussed here.  It will be interesting to see where things are even six months from now in regards to the adoption and use cases that Docker has created.

DevOpsDays Chicago

As a first timer to this event, and to any devopsdays event, I’d like to write up a quick summary and share a few of the key takeaways that I got from it.  For anybody who isn’t familiar, devopsdays events are two day events spread throughout the year at different locations around the world.  You can find more information on their site here:

http://devopsdays.org/

The nice thing about the devopsdays events is that they are small enough to be very affordable ($100 if you register early).  One unique thing about the events is their format, which I really liked.  The first half of each day is split into talks given by various leaders in the industry, followed by “Ignite” talks, which are very brief but informative talks on a certain subject, followed by “open spaces”, which are pretty much open group discussions about topics suggested by event participants.  I spoke to a number of individuals who enjoyed the open spaces even though they didn’t like all of the subjects covered in the talks.  So I thought there was a very nice balance to the format of the conference and how everything was laid out.

I noticed that a number of the talks focused on broad cultural topics, along with a few technical subjects.  Even if you don’t like the format or topics, you will more than likely learn a few things from speakers or participants that will help you moving forward in your career.  Obviously you will get more out of a conference if you are more involved, so go out of your way to introduce yourself and try to talk to as many people as you can.  The hallway track is really good for introducing yourself and meeting people.

So much to learn

One thing that really stood out to me was how varied the composition and backgrounds of the attendees were.  I met people from gigantic organizations all the way down to small startup companies and basically everything in between.  The balance and mixture of attendees was really cool to see, and it was great to get some different perspectives on different topics.

Another thing that really stood out to me was that the topics covered were really well balanced, although some may disagree.  Heading into the conference I thought most of the talks were going to be super technical in nature, but it turned out that a lot of the talks revolved more around the concepts and ideas that drive the DevOps movement than around the tools associated with DevOps.

One massive takeaway that I got from the conference was that DevOps is really just a buzzword.  The definition that I have of DevOps at my organization may be totally different from somebody else’s definition at a different company.  What is important is that even though there will be differences in implementation at different scales, a lot of the underlying concepts and ideas will be similar and can be used to drive change and improve processes as well as efficiency.

DevOps is really not just about a specific tool or set of tools you may use to get something accomplished; it is more complicated than that.  DevOps is about solving a problem or set of problems first and foremost, and the tooling to do those tasks is secondary.  Before this event I had these two distinguishing traits of DevOps backwards: I thought I could drive change with tools, but now I understand that it is much more important to drive the change in culture first and then to retool your environment once you have the buy in to do so.

Talk to people

One of the more underrated aspects of this conference (and any conference for that matter) is the amount of knowledge you can pick up from the hallway track.  The hallway track is basically just a way to talk to people that you may or may not have met yet who are doing interesting work or have solved problems that you are trying to solve.  I ran into a few people who were working on some interesting challenges *cough* Docker *cough* in the hallway track, and for the first time I really got a chance to see what others are doing with Docker, which I thought was really cool.

Open spaces were another nice way to get people to intermingle.  The open spaces allowed people working on similar issues to put their heads together and discuss specific topics that attendees either found interesting or were actively working on.  A lot of good discussion occurred in these open spaces, and a good amount of knowledge was spread around.

Conclusion

DevOps is not one thing.  It is not a set of tools but rather a shift in thinking, and therefore involves various cultural aspects, which can get very complicated.  I think in the years to come, as DevOps evolves, a lot more of these aspects will become much clearer and will hopefully make it easier for people to get involved in embracing the changes that come along with the DevOps mentality.

Current thought leaders in the DevOps space (many of them in attendance at devopsdays Chicago) are doing a great job of moving the discussion forward, and there are some awesome discussions at the devopsdays events.  Podcasts like Arrested DevOps, The Ship Show and DevOps Cafe are definitely creating a lot of good discussion around the subject as well.  Judging from my observations at the event, it seems there is still a lot of work to be done before DevOps becomes more common and mainstream.

Analyzing cloud costs

Knowing about and controlling the costs of a cloud environment is not only a good skill for an admin or engineer to have, it also greatly helps others inside your organization.  Knowing your environment and its cost overhead also makes you (or your team) look better when you can pinpoint bottlenecks, as well as anomalies in your environment, and create solutions to mitigate costs or otherwise track cloud resource utilization.  Plus, it can even get you some extra credit.

So with this in mind, I’d like to talk about a few strategies and tools I have been experimenting with to help map out and accurately model the costs and utilization of different workloads spread out across an AWS environment.

ICE

The first tool I’d like to mention, and probably my favorite, is ICE.  It is a tool developed by Netflix that analyzes costs across your AWS infrastructure.  It gives you nice graphs and advanced breakdowns of prices, including spot pricing vs. on demand and many other permutations across your AWS infrastructure.

This is the best explanation I can find, pulled right from their github page:

The ability to trend usage patterns on a global scale, yet decompose them down to a region, availability zone, or service team provides incredible flexibility. Ice allows us to quantify our AWS footprint and to make educated decisions regarding reservation purchases and reallocation of resources.

Amazon ICE

It has a nice interface and some slick filtering, so breaking things down on a region by region level becomes easy, which is otherwise not the case with the other tools.  This tool is also great for spotting trends and anomalies in your environment, which can sometimes go undetected if they are not viewed in the correct context.

The downside is the overhead associated with getting this up and running, but there is a Chef cookbook that will pretty much do the installation for you if you are comfortable with Chef.  You will need to override some attributes, but otherwise it is pretty straightforward.  If you need assistance, let me know and I’d be glad to walk you through getting it set up.
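
As a very rough sketch of what that looks like with Chef (the cookbook name and attribute keys below are placeholders, not necessarily the cookbook’s real ones; its README spells out which attributes actually need overriding):

```bash
# Bootstrap a node with an ICE cookbook, overriding attributes to point at your
# detailed billing bucket (cookbook name and attribute keys are hypothetical)
knife bootstrap ice.example.com \
  --run-list 'recipe[netflix-ice]' \
  --json-attributes '{"ice": {"billing_bucket": "my-detailed-billing-bucket"}}'
```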

AWS Calculator

This is a handy tool to help ballpark and model costs for various AWS services.  One disappointing discovery about this tool is that it doesn’t help model spot instance prices.

AWS calculator

This is great for mocking up what the TCO of a server or group of servers might look like.  It is also good for getting a general feel for what different server costs will be over a certain number of months and/or years.

Be sure to check this regularly to help stay current on the most recent news, because AWS moves quickly, with seemingly constant updates, and has been dropping prices steadily over the past three years.  Especially with the increased competition from Microsoft (Azure) and Google (Google Cloud), AWS seems to be constantly slashing prices and adding new improvements and features to its products.

AWS Billing and Cost Management

This one is pretty self explanatory.  It is built right in to AWS and, as such, it can be a very powerful tool that is easily overlooked.  It offers a variety of detailed information about costs and billing.  It also offers some nice graphs and charts for trend spotting, and the data can be exported for analysis, which is also nice (even though I haven’t gotten that far yet).

The major downside (in my opinion) is that you can’t get the granular price breakdowns that are available with a tool like ICE.  For example, there isn’t an easy way to find a price comparison breakdown per region or other more detailed information.

Trusted Advisor

This tool is great, and basic usage is free.  This offering from AWS is really nice for helping to find and optimize settings according to a number of good practice recommendations created by Amazon.  Not only does it give you some really nice price breakdowns, it also reports on things like security and performance, which can be equally useful.  Use it often to tighten up areas of your infrastructure and to optimize costs.

One downside is that to unlock all of the features and functionality you need to upgrade to the enterprise tier, which is obviously more expensive.

AWS ELK Billing

I just found out about this one, but it looks like it might be a very nice solution, leveraging the Elasticsearch + Logstash + Kibana stack.  I have written a post about getting started with the ELK stack, so it shouldn’t be difficult at all to begin playing around with this solution if you are interested.

If you get this tool up and working I would love to hear about it.

Cost saving tips

I have compiled a list of simple yet powerful tips to help control costs in AWS.  Ideally a combination of all of these tips would be used to help control costs.

  • Upgrade server and service instance generations as often as possible for automatic performance improvements and reduced prices.  For example, move from generation 1 to generation 3, m1.xxx -> m3.xxx.
  • Try to size servers correctly by keeping them busy.  Servers that are running but aren’t doing anything are essentially wasting money.  Either run them only during the hours they are needed or bump up the amount of utilization per box, either by downsizing the server or upping the workload.
  • On that note, size servers correctly according to workload.  For example, a workload that demands CPU cycles should not be deployed on a memory optimized server.
  • Adopt spot instances and utilize them early on.  Spot prices are significantly lower than standard on demand prices.  Just be careful, because your spot instances can disappear when the market price rises.
  • In the same vein as spot instances, use reserved instances.  These instance types can significantly reduce prices and have the advantage that they won’t disappear, so long running servers and services benefit from this type of cost control.
  • Set up granular billing as early as possible.  Create and tune alerts based on expected usage for tighter control of costs.  It’s better to start off knowing and controlling environment costs sooner rather than later.
  • Delete unused EBS volumes.  Servers and volumes can come and go, but often times EBS volumes become orphaned and essentially useless.  Therefore it is a good idea to clean up unused EBS volumes whenever you can.  Of course this process can and should be automated; a quick sketch follows below.
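
Here is a minimal sketch of that cleanup using the AWS CLI (review the output before deleting anything; the volume ID shown is just a placeholder):

```bash
# Find EBS volumes that are not attached to any instance
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].[VolumeId,Size,CreateTime]' \
  --output table

# After confirming a volume really is orphaned, remove it
aws ec2 delete-volume --volume-id vol-12345678
```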

Conclusion

Managing cost and optimizing your cloud infrastructure really could be considered its own discipline in some regards.  Environments can become complex quite quickly, with instances, services and resources spinning up and down as well as dynamically growing and shrinking to accommodate workloads; ever evolving environments like this can lead to what some call “cloud sprawl”.

The combination of the tools and cost saving tips mentioned above can be a real lifesaver when you are looking to squeeze the most bang for your buck out of your cloud environment.  It can also lead to a much more solid understanding of all the moving pieces in your environment and can help you determine exactly what is going on at any given time, which is especially useful for DevOps admins and engineers.

If you have any other cool tips or tricks for controlling AWS costs or other cloud environment costs, let me know and I’ll be sure to add them here!

Review: Webmin Administrator’s Cookbook

I just recently finished reading the Webmin Administrator’s Cookbook and thought I would share some of my thoughts and opinions about the book.  While I don’t typically review books on the blog, I thought this would be a good opportunity to discuss a nice book.  The book is written by a very knowledgeable and credible author, Michal Karzynski.  His background includes over a decade of experience as a developer in various programming languages as well as a scientific research background.

This book is a good read for everyone from seasoned veterans and professionals all the way down to aspiring and freshly minted admins.

The book itself covers a broad, inclusive set of topics, including logging, user management, backups, web server administration and many others.  The basic theme of the book is to use Webmin as a sort of framework for discussing and covering various administrative topics and tasks.  From their website, Webmin is described as follows:

Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user accounts, Apache, DNS, file sharing and much more.

This works out to be a perfect tool for aspiring sysadmins, because it really does a nice job of cloaking a lot of the nitty gritty complexity and detail that can be overwhelming and confusing for new admins or users who are unfamiliar with the concepts and tooling that Webmin covers.  By using Webmin, one can learn about a large number of interesting topics without having to worry about how to type in all of the commands or how to install and configure the tools that come bundled up in Webmin.  This allows users to really increase their productivity.  Couple the Webmin tool with a cookbook of nice concrete examples and you have a great recipe for learning how to use a powerful tool correctly.

Wrapping such a broad spectrum of topics and tools into a web based tool can be complicated.  But used as reference material, this book does a great job of making everything clear, with good examples explaining how everything works together, as well as pictorial examples that really do a nice job of tying the written concepts to concrete, real world usage.  Now is also a good time to mention that this book follows a nice pattern of organizing topics.  From the outset, the book starts with the more basic administrative topics and principles, covering each topic thoroughly with good descriptions and solid examples.  The book then progresses quite nicely through the different topics and eventually gets into and covers some of the more obscure ones.

The Webmin Administrator’s Cookbook does a nice job of combining many complex system administration topics into a nice, easy to follow reference guide that can be utilized at all different levels of Linux and administrative experience.  If you use Webmin in any capacity at all, this book would be a great reference and guide to help you be more productive in your day to day work with the tool.

You can find more information about the book here.  While you are at it, check out the author, Michal Karzynski’s blog for more interesting and useful tips – http://michal.karzynski.pl.

Podcasts for DevOps admins

Getting up to speed in a fast moving environment forces you to think about things in a different way, which for me was (and is) an interesting sort of paradigm shift.  Moving from the enterprise to a startup, I have found things to be much different, so embracing the DevOps philosophy and culture has been a main focus of mine through this transition, in a good way of course.  Today I’d like to share some interesting resources that I have found to be immensely helpful in my journey thus far into the land of DevOps.  Hopefully readers are in the same position that I am in and can use this information in their own DevOps journey.

In my experience I have found that podcasts are one of the absolute best ways to consume information.  Whether it be on a morning commute or viewing a show live, good podcasts are one of the best learning tools around.  So for today’s post, I have compiled a list of some good shows related to DevOps that I hope others find useful.

If you’re interested, I wrote a post a while back focusing on some of my favorite podcasts relating to system administration.  You can find the list in the original Podcasts for System Administrators post here.

The Food Fight Show

From their website: “Food Fight is a bi-weekly podcast for the Chef community. We bring together the smartest people in the Chef community and the broader DevOps world to discuss the thorniest issues in system administration.”  This show offers some great conversation on topics around DevOps, a lot of really in depth technical discussion from industry experts, as well as some great interviews with various contributors to the DevOps community.  This is currently my favorite DevOps podcast, and there are a large number of episodes to choose from, so you can hand pick a few episodes to try out if you are skeptical.

DevOps Cafe

This show takes a round table format similar to the style of The Food Fight Show.  It is co-hosted by Damon Edwards and John Willis and covers a lot of cool news and interesting topics on the bleeding edge of the DevOps world.  There is a nice variety of interesting guests as well as relevant topics of discussion.  I like this show because, for me, it does a great job of focusing in on the more practical aspects of DevOps, rather than the abstract concepts and ideas behind DevOps.  To me, it is more practice than theory.  That might be a horrible description, so you’ll just have to go check out the podcast to find out for yourself.

Arrested DevOps

This podcast is in much the same vein as The Food Fight Show, where DevOps pros sit down and discuss issues related to what is going on in the DevOps world.  I just started listening to this podcast, as it is one of the newer additions to my DevOps podcast rotation.  This show definitely has a lot of potential; the hosts are knowledgeable, the guests are smart and the topics of conversation are interesting.  Trevor and Matt do a good job of mixing technical discussion with some of the more cultural DevOps topics and ideas, and I would definitely recommend taking a look at this podcast.

Ops All The Things

Another new kid on the block, this show is hosted by Chris Webber and Steven Murawski of Stack Exchange fame.  The show is geared towards system administration, operations and DevOps.  I like this podcast because it does a good job of blending DevOps with system administration, which is the track that I followed into the world of DevOps.  Much of the show is geared towards nuts and bolts administration, which is nice.  Topics are often in depth and technical, with discussions revolving around things like configuration management, monitoring, revision control, etc.  One other nice feature is that it covers some administration topics related to Windows, which I think gives listeners a good perspective.

The Ship Show

I haven’t had a chance to dive too deep into this one yet, but judging from the few episodes I’ve been able to listen to, this show definitely captures a lot of relevant and interesting issues in the community.  Once I get more episodes under my belt and get a better feel for the show I will update this post.  But just to give readers an idea, this is from their bio:  “The Ship Show is a twice-monthly podcast, featuring discussion on everything from build engineering to DevOps to release management, plus interviews, new tools and techniques, and reviews.”

If I missed any, or if you’re interested in starting a DevOps oriented podcast, let me know and I will be sure to add you to the list and help spread the word.  I think it’s important for people in the community to help out and share their knowledge.
