Category Archives: Career

Thoughts on Working Remotely

Time Management

I’d like to share a few nuggets from my experience working as a remote employee.  I have been working from home for around a year and a half and have learned some lessons along the way.  While I absolutely recommend trying the remote option if possible, there are a few things that are important to know.

Working remotely is definitely not for everybody.  To be an effective remote employee you need a certain amount of discipline, internal drive, and self-motivation.  You also need to be a good communicator (covered below).  If you have trouble staying on task, finding things to do, or working by yourself in an isolated environment, you will quickly discover that working remotely can be more stressful than working in an office, where you get daily interaction and guidance from others.


That being said, I feel that in most cases, the positives outweigh the negatives.  Below are a few of the biggest benefits that I have discovered.

  • There is little to no commute.  Long commutes, especially in big cities, create a certain amount of stress and strain that you simply don’t have to deal with when working from home.  As a bonus, you save some cash on gas and spare your vehicle miles of wear and tear.
  • It is easier to avoid distractions.  This of course depends on how you handle your work, but if you are disciplined it becomes much easier to get work done with fewer distractions.  At home, if you manage to separate home from work (more on that topic below), you don’t need to worry about random people stopping by your desk to shoot the shit or otherwise bother you.  By avoiding these simple distractions you can become much more productive in shorter periods of time.
  • No dress code.  This is a surprisingly simple but powerful bonus to working from home.  Having a dress code was actually stressful for me in previous jobs; I always disagreed with it and didn’t understand why I couldn’t wear a t-shirt and jeans to work.  Now that I can wear whatever I want, I feel more comfortable and relaxed, which leads to better productivity.
  • The schedule can be more flexible.  I can pick my own hours for the most part.  Obviously it is best to get into a routine of working the same hours each day, but if something comes up I can step out for a few hours and make the time up in the evening, and in most cases it won’t be a big deal to coworkers.  This flexibility is a great perk of working remotely, and it allows you much more time for yourself when needed because you aren’t restricted to a set schedule.

Achieving a Work/life balance

Maintaining a balance between life at home and life at work can get very blurry when you telecommute.  I would argue that finding a balance between personal life and work is the single most important thing to work toward when making the transition from on-site employment, because it directly affects your happiness (or sorrow), which in turn influences all other aspects of your life, including activities and relationships outside of work.

It is super easy to get into the habit of “always being around” and working extra, often crazy, hours when you are at home.  One thing that has helped me improve my work/life balance and break out of the always-working mode is creating routines.

I try to start and end work at the same time each day during the week.  Likewise, I make a point of taking breaks throughout the day to break up the time.  For example, I take a 30ish minute walk around the same time every day, and I have a coffee ritual that always precedes work in the morning.  These daily cues help me get into the flow of the day and start it the same way every time.

Another mechanism I have discovered to help cope with the work hours is to leave work at work.  Find a way to create a clear distinction between home and work, either by creating an office at home where work stays or by finding a coffee shop or co-working space.  As a side note, I have found 2-3 days per week at a coffee shop or co-working space to be the best middle ground for me, but everybody is different, so if you are new to remote work you will need to experiment.  That way you have a place that represents what a workplace should be, and when you leave that place, the work stays there.

Some folks mention that it can get lonely.  I definitely agree with this sentiment.  On the upside, working in this type of environment can sort of force you to find ways to interact with people.  It can feel uncomfortable at first, but finding social activities will help alleviate the loneliness.  Coffee shops and co-working spaces are a great place to start.  I find that working in an environment with others helps mix things up, and the extra interaction really helps you feel like part of a community.  These environments are a great solution if you are introverted and have a hard time getting out and meeting people.

Regardless of what exactly you do, it is absolutely critical to get out of your house.  This should be a no-brainer, but I can’t stress its importance enough.  Even if you’re just taking walks or going to the store, make sure you find things to do that get you out of the house.  I have found some things that work for me, but again, you will need to experiment.

If you are ambitious, I suggest getting involved in some communities outside of work.  Meeting new people (outside of a work environment) is a very powerful tool for managing your work/life balance.  Obviously this advice applies to more scenarios than remote work, but I think it becomes much more important when you work from home.  If you want some ideas for ways to get out or communities to join, feel free to email or comment and I can let you know what has worked for me.


Another important piece of the social aspect that I have discovered is that it is VERY important to have many open communication channels with coworkers.  Google Hangouts, Slack, Screenhero, WebEx, Skype, email, IRC and any other collaboration tools you can find are super important for communicating with coworkers and for building relationships and culture in distributed work environments.  In my experience, if you are working as part of a team and aren’t a great communicator, relationships with coworkers can quickly become strained.

Also, having regular meetings with key members of your team is important.  A once-a-week check-in with your manager is a good starting point.  It helps you keep track of what you’re doing, and it helps others on your team understand the type of work you’re doing so you’re not as isolated.  Gaining the trust of your coworkers is always very important.


For me, the most difficult thing to achieve when transitioning to working from home was maintaining a good work/life balance.  You are 100% responsible for how you choose to spend your time, so it becomes important to make the right decisions about how you prioritize.

For example, one thing I have struggled with is working the right amount of time.  There was a stretch where I was working 12-14 hour days just because I kept finding more and more things to do.  While that is good for your employer, it is not good for you or anyone around you.  The work will always be there, so you have to find strategies that help you step away once you have put in enough hours for the day.

Everybody is different, so if you are new to telecommuting I encourage you to experiment with different techniques for managing your work/life balance.  While I feel that working remotely is for the most part a net positive, it has its own set of issues, so please be careful: don’t work too much, and especially don’t expend extra energy or get stressed out about things you can’t control.

DevOps Conferences


I did a post quite a while ago that highlighted some of the cooler sysadmin- and operations-oriented conferences that were on my radar at the time.  Since then I have changed jobs and am now in a DevOps oriented position, so I’d like to revisit the subject and update that list to reflect some of the cool conferences in the DevOps space.

I’d like to start off by saying that even if you can’t make it to the bigger conferences, local groups and meetups are an excellent way to get out and meet other professionals who do what you do.  Local groups are also an excellent way to stay in the loop on what’s current and learn about what others are doing.  If you are interested in eventually becoming a presenter or speaker, local meetups and groups can be a great way to get started.  There are numerous opportunities and communities (especially in bigger cities), so check here for information or to see if there is a DevOps meetup near you.  If there is nothing nearby, start one!  If you can’t find any DevOps groups, look for Linux or developer groups and network from there; DevOps is becoming popular in broader circles.

After you get your feet wet with meetups, the next step is to look for conferences that sound interesting to you.  There are about a million different opportunities to choose from: security conferences, developer conferences, server and network conferences, and everything in between.  I am sticking with strictly DevOps related conferences because that is what I am currently most interested in and know best.

Feel free to comment if I missed any conferences that you think should be on this list.

DevOps Days (Multiple dates)

Perhaps the most DevOps-centric conferences on this list.  These events are a great way to meet and network with fellow DevOps professionals.  The space and industry are changing constantly, and staying on top of all of the changes is crucial to being successful.  Another nice thing about DevOps Days is that they are spread out around the country (and world) and throughout the year, so they are very accessible.  WARNING:  DevOps Days are not tied to any one set of DevOps tools but rather to the principles and techniques and how to apply them to different environments.  If you are looking for super in-depth technical talks, this one may not be for you.

ChefConf (March)

The main Chef conference.  There are large conferences for each of the main configuration management tools, but I chose to highlight Chef because that’s what we use at my job.  There are lots of good talks with a Chef centered theme that are also great because the practices can be applied with other tools.  For example, ChefConf covers many DevOps themes, including continuous integration and deployment, how to scale environments, tying different tools together, and general configuration management techniques.  Highly recommended for Chef users; feel free to substitute one of the other big configuration management tool conferences here (Salt, Puppet, Ansible) if Chef isn’t your cup of tea.

CoreOS Fest (May)

(Note: the 2015 videos haven’t been posted yet.)

Admittedly, this is a much smaller and more niche conference, but it is still awesome.  It is the first conference put on by the folks at CoreOS and was designed to help the community keep up with what is going on in the CoreOS and container world.  The venue is pretty small, but the content at this year’s conference was very good.  There were some epic announcements and talks, including Tectonic announcements and Kubernetes deep dives, so if container technology is something you’re interested in, this conference is definitely worth checking out.

Velocity (May)

This one just popped up on my DevOps conference radar.  I have been hearing good things about it for a while now but have not had the opportunity to go.  It always has interesting speakers and topics, and a number of DevOps thought leaders show up for this event.  One cool thing about this conference is that there are a variety of topics at any one time, with technical tracks covering different areas of DevOps, so it offers a nice, wide spectrum of information.

DockerCon (June)

Docker has been growing at a crazy pace, so this seems like the big conference to check out if you are in the container space.  It is similar to CoreOS Fest but focuses more heavily on Docker (obviously).  I haven’t had a chance to go to one of these yet, but containers and Docker have so much momentum that they are very difficult to avoid.  Many people believe that container technologies are the path to the future, so it is a good idea to be as close to the action as you can.

Monitorama (June)

I think this is one of the coolest conferences, but that is probably just because I am so obsessed with monitoring and metrics collection.  Monitoring seems to be one of those topics that isn’t always fun to deal with, but the talks and technologies at this conference actually make me excited about it.  To most, monitoring is a necessary evil, and a lot of the content from this conference can make your life easier in all aspects of monitoring, from new trends and tools to how to correctly monitor and scale infrastructures.  Talks can be technical but are well worth it if monitoring interests you.

AWS Re:Invent (November)

This one is a monster.  It is the big conference that AWS puts on every year to announce new products and technologies as well as provide some incredibly helpful technical talks.  I believe it is one of the pricier and more exclusive conferences, but it offers a lot in the way of content and detail.  It features some of the best, most technical topics of discussion that I have seen and has been invaluable as a learning resource.  All of the videos from the conference are posted on YouTube, so you can get access to this information for free.  Obviously the content is AWS related, but I have found it to be a great way to learn.


Even if you don’t have a lot of time to travel to these conferences, nearly all of them post videos from the event, so you can watch whenever you want.  This is an INCREDIBLE learning tool and resource that is FREE.  The only downside to the videos is that you can’t ask questions, but it is easy to find a presenter’s contact info if you feel like reaching out.

That being said, you tend to get a lot more out of attending in person.  The main benefit of going to conferences over watching the videos alone is that you get to meet and talk to others in the space, get a feel for what everybody else is doing, and check out many cool tools that you might otherwise never hear about.  At every conference I attend, I learn about some incredibly useful tech that I had never heard of, and I run into interesting people that I would otherwise not have the opportunity to meet.

So if you can, definitely get out to these conferences, meet and talk to people, and get as much out of them as possible.  If you can’t make it, check out the videos afterwards for some really great nuggets of information; they are a great way to keep your skills sharp and current.

If you have any more conferences to add to this list I would be happy to update it!  I am always looking for new conferences and DevOps related events.

Useful tools for Docker development


Docker is still a young project, and the ecosystem around it hasn’t matured to the point that many people feel comfortable using it yet.  It is nice to have such a fast growing set of tools, but the downside is that many of them are not production ready.  I think as the ecosystem solidifies and Docker adoption grows, we will see a healthy set of solid, production ready tools built on top of the current generation.

Once you get introduced to the concepts and ideas behind Docker, you quickly realize the power and potential it holds.  Inevitably though, there comes a “now what?” moment where you realize that Docker can do some interesting things, but you get stuck because there are barriers to simply dropping it into a production environment.

One problem is that you can’t simply “turn on” Docker in your environment: you need tools to help manage images and containers, orchestration, development, and so on.  There are a number of challenges to overcome before you can do useful and interesting things with Docker once you get past the introductory novelty of building an image and deploying simple containers.

I will attempt to make sense of the current state of Docker and take some of the guesswork out of which tools to use in which situations, for those who are hesitant to adopt it.  This post focuses mostly on the development side of the Docker ecosystem because that is a nice gateway to working with and getting acquainted with Docker.


Boot2Docker

As you may be aware, Docker does not (yet) run natively on Mac OS X or Windows.  This can definitely be a hindrance to adopting Docker and building acceptance among developers.  Boot2Docker massively simplifies this issue by creating a sandbox to work with Docker, a thin layer between Docker and the Mac (or Windows) host via the boot2docker VM.

You can check it out here, but essentially you download a package, install it, and you are ready to start hacking away on Docker on your Mac.  Definitely a must for Mac OS X (and Windows) users looking to begin their Docker journey, because the complexity is almost completely removed.

Behind the scenes, Boot2docker abstracts away and simplifies a number of things, like setting up SSH keys, managing network interfaces, and setting up VM integrations and guest additions.  Boot2docker also bundles a CLI for controlling the VM that hosts Docker, so it is easy to manage and configure the VM from the terminal.
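If you want to poke at the Docker daemon inside the VM programmatically, here is a minimal sketch using the docker-py library.  This is my own example rather than anything Boot2Docker ships, and the IP address and port below are common boot2docker defaults; check the output of boot2docker ip on your machine.

```python
# Minimal sketch: connect to the Docker daemon running inside the
# boot2docker VM using docker-py (pip install docker-py). The IP/port
# are typical boot2docker defaults and may differ in your setup;
# newer boot2docker releases require TLS on port 2376 instead.
from docker import Client

client = Client(base_url='tcp://192.168.59.103:2375')

# If connectivity works, these print daemon and version details.
print(client.version())
print(client.info())

# List all containers (running or not) the daemon knows about.
for container in client.containers(all=True):
    print(container['Id'], container['Image'])
```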


CoreOS

It would take many blog posts to describe everything that CoreOS and its tooling can do.  The reason I mention it here is that CoreOS has quickly become one of the core building blocks of many Docker environments.  Docker as it is today is not specifically designed for distributed workloads, and as such doesn’t provide much tooling for the challenges that accompany distributed systems.  CoreOS bridges this gap very well.

CoreOS is a minimal Linux distribution that aims to help with a number of Docker related tasks and challenges.  It is distributed by design, so it can do some really interesting things with images and containers using etcd, systemd, fleetd, confd, and other tools as the platform continues to evolve.
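To give a flavor of how one of these pieces works, here is a small sketch of the service discovery pattern that etcd enables, using the python-etcd client library.  The library choice, keys, and values are all my own hypothetical example, not anything CoreOS ships:

```python
# Sketch of the etcd service-discovery pattern that CoreOS tooling
# builds on. Assumes python-etcd (pip install python-etcd) and an etcd
# instance on its classic default port. Keys and values are hypothetical.
import etcd

client = etcd.Client(host='127.0.0.1', port=4001)

# A web container announces where it can be reached. The TTL makes the
# entry expire unless it is refreshed, so dead hosts drop out on their own.
client.write('/services/web/backend1', '10.0.0.12:8080', ttl=60)

# Elsewhere in the cluster, a proxy discovers the current backends.
for node in client.read('/services/web', recursive=True).children:
    print(node.key, node.value)
```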

Because of this tooling and philosophy, CoreOS machines can be rebooted on the fly without interrupting services, since processes are clustered across machines.  This means that maintenance can occur whenever and wherever, which makes the resiliency factor very high for CoreOS servers.

Another highlight is its push-based security model.  Instead of administrators manually applying security patches, the CoreOS maintainers periodically push updates to servers, alleviating the need to update all the time.  This was very nice when the Shellshock vulnerability was disclosed: within a day or so, a patch was automatically pushed to all CoreOS servers, automating an otherwise tedious process, especially for servers without config management tools.


Fig

Fig is a must-have for anybody who works with Docker on a regular basis, i.e. developers.  Fig allows you to define your environment in a simple YAML config file and then bring up an entire development environment with one command, fig up.

Fig works very well for a development workflow because you can rapidly prototype and test how Docker images will work together and catch issues early that would otherwise be hard to test for.  For example, if you are working on an application stack, you can simply define how the different containers should work and interact together in the fig file, as sketched below.
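As a rough illustration, here is what a hypothetical fig.yml for a small web app and a Redis container might look like (the service names, image, and ports are placeholders, not anything prescriptive):

```yaml
# Hypothetical fig.yml: a web app container linked to a Redis container.
# Running `fig up` builds the web image and starts both services together.
web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
```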

The downside to fig is that in its current form it isn’t really equipped to deal with distributed Docker hosts, something that a large number of projects are attempting to solve and simplify.  This shouldn’t be an issue if you are aware of its limitations beforehand and know that there are some workloads fig is not built for.


Panamax

This is a cool project out of CenturyLink Labs that aims to solve problems around Docker app development and orchestration.  Panamax is similar to Fig in that it stitches containers together logically, but it differs in a few regards.  First, Panamax builds on CoreOS to leverage some of its built-in tools (etcd, fleetd, etc.).  Also note that Panamax currently only supports single host deployments.  The creators have stated that clustered, multi-host support is in the works, but for now you will have to use Panamax on a single host.

Panamax simplifies Docker image and application orchestration behind the scenes, and additionally places a nice layer of abstraction on top of this process so that managing the Docker image “stack” becomes even easier through a slick GUI.  With the GUI you can set environment variables, link containers together, and bind ports and volumes.

Panamax draws a number of its concepts from Fig.  It uses templates as the underlying way to compose containers and applications, similar to Fig’s config files; both use YAML to compose and orchestrate Docker container behavior.  Another cool thing about Panamax is that there is a public template repo for getting different application and container stacks up and running, so community participation is a really nice aspect of the project.

If hand-editing a config file on the command line isn’t an ideal solution in your environment, this tool is definitely worth a look.  Panamax is a great way to quickly develop and prototype Docker containers and applications.


Flocker

This is a very young but interesting project.  It looks interesting because of the way it handles volume management.  Right now one of the biggest challenges to widespread Docker adoption is persisting storage across distributed hosts, which is exactly the problem that Flocker solves.

From their github page:

Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.

Basically, Flocker uses some ZFS magic behind the scenes to allow volumes to float between servers, providing persistent storage across machines and containers.  That is a huge win for building distributed systems that require persistent data and storage, e.g. databases.
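To make that concrete, the early Flocker tutorials drive deployments with a fig-like pair of YAML files and a flocker-deploy command.  The sketch below is from memory of those tutorials, so treat the exact keys and names as assumptions:

```yaml
# Hypothetical application.yml: one stateful container with a volume
# that Flocker can move between hosts along with the container.
"version": 1
"applications":
  "mysql-example":
    "image": "clusterhq/mysql"
    "volume":
      "mountpoint": "/var/lib/mysql"
```

```yaml
# Hypothetical deployment.yml: which node runs which application.
# Editing this to move "mysql-example" to the second node and re-running
# `flocker-deploy deployment.yml application.yml` migrates its data too.
"version": 1
"nodes":
  "172.16.255.250": ["mysql-example"]
  "172.16.255.251": []
```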

Definitely keep an eye on this project for improvements and look for them to push this area in the future.  The creators have said it isn’t production ready quite yet, but it is a great tool to use in a test or staging environment.


Flynn

Flynn touts itself as a Platform as a Service (PaaS) built on Docker, in a very similar vein to Heroku.  Having a Docker PaaS is a huge win for developers because it simplifies the developer workflow.  There are some great benefits to having a PaaS in your environment; the subject could easily expand into its own post.

The approach that Flynn takes (and PaaS in general) is that operations should be a product team.  With Flynn, ops can provide the platform while developers focus on their own tasks: developing software, testing, and generally spending their time on development instead of fighting operations.  Flynn does a nice job of decoupling operations tasks from dev tasks, so developers don’t need to rely on operations to do their work and operations don’t need to concern themselves with development tasks, which can otherwise cause friction and efficiency issues.

Flynn works by tying together a number of tools, created specifically to solve the challenges of building a PaaS (scheduling, persistent storage, orchestration, clustering, etc.), into one single entity that runs workloads via Docker.

Currently its developers state that Flynn is not quite suitable for production use, but it is mature enough to play around with and even deploy apps to.


Deis

Deis is another PaaS for Docker, aiming to solve the same problems and challenges as Flynn, so there is definitely some overlap between the projects as far as end users are concerned.  There is a nice CLI tool for managing and interacting with Deis, and it offers much of the same functionality that either Heroku or Flynn offers.  Deis can do things like horizontal application scaling, supports many different application frameworks, and is open source.

Deis is similar in concept to Flynn in that it aims to solve PaaS challenges, but the two are quite different in their implementation and how they achieve their goals.  As the creator of Deis explains, Deis is much more practical in its approach: it fits together a number of technologies and tools that already exist, building only the pieces that are missing, and it leverages CoreOS for many of the tasks it needs to operate.  Flynn is more ambitious, implementing much of its own tooling, including its own scheduler and registration service, and relying on only a few existing tools.


As the Docker ecosystem continues to evolve, more and more options keep sprouting up.  There are already a number of great tools in the space, and as the community evolves I believe the current tools will continue to improve and new, useful tools will be built for Docker specific workloads.  It is really cool to see how the Docker ecosystem is growing and how the tools and technologies are disrupting traditional views in a number of areas of tech, including virtualization, DevOps, deployments, and application development, among others.

I anticipate the adoption of Docker to continue growing for the foreseeable future as the core Docker project improves and stabilizes, along with the tools built around it that I have discussed here.  It will be interesting to see where things are even six months from now in regards to the adoption and use cases that Docker has created.

DevOpsDays Chicago


As a first timer to this event, and to any devopsdays event, I’d like to write up a quick summary and share a few key takeaways.  For anybody who isn’t familiar, devopsdays events are 2 day events held throughout the year at different locations around the world.  You can find more information on their site here.

The nice thing about devopsdays events is that they are small enough to be very affordable ($100 if you register early).  One unique thing about them is their format, which I really liked.  The first half of the day is split into talks given by various leaders in the industry, followed by “Ignite” talks, which are very brief but informative talks on a certain subject, followed by “open spaces,” which are pretty much open group discussions about topics suggested by event participants.  I spoke to a number of individuals who enjoyed the open spaces even though they didn’t like all of the subjects covered in the talks.  So I thought there was a very nice balance to the format of the conference and how everything was laid out.

I noticed that a number of the talks focused on broad cultural topics, along with a few technical subjects.  Even if you don’t like the format or topics, you will more than likely learn a few things from speakers or participants that will help you move forward in your career.  Obviously you will get more out of a conference if you are more involved, so go out of your way to introduce yourself and talk to as many people as you can.  The hallway track is really good for introducing yourself and meeting people.

So much to learn

One thing that really stood out to me was how varied the composition and background of the attendees was.  I met people from gigantic organizations all the way down to small startups and basically everything in between.  The balance and mixture of attendees was really cool to see, and it was great to get different perspectives on a range of topics.

Another thing that stood out to me was that the topics covered were really well balanced, although some may disagree.  Heading into the conference I thought most of the talks were going to be super technical in nature, but it turned out a lot of them revolved more around the concepts and ideas that drive the DevOps movement rather than just the tools associated with it.

One massive takeaway from the conference was that DevOps is really just a buzzword.  The definition I have of DevOps at my organization may be totally different from somebody else’s definition at a different company.  What is important is that even though implementations differ at different scales, a lot of the underlying concepts and ideas are similar and can be used to drive change and improve processes and efficiency.

DevOps is really not just about a specific tool or set of tools you may use to get something accomplished; it is more complicated than that.  DevOps is about solving a problem or set of problems first and foremost; the tooling for those tasks is secondary.  Before this event I had these two distinguishing traits of DevOps backwards: I thought I could drive change with tools, but now I understand that it is much more important to drive the change in culture first and then retool your environment once you have the buy-in to do so.

Talk to people

One of the more underrated aspects of this conference (and any conference for that matter) is the amount of knowledge you can pick up from the hallway track.  The hallway track is basically just talking to people you may not have met yet who are doing interesting work or have solved problems you are trying to solve.  I ran into a few people who were working on some interesting challenges *cough* docker *cough*, and for the first time I really got a chance to see what others are doing with Docker, which I thought was really cool.

Open spaces were another nice way to get people to intermingle.  They allowed people working on similar issues to put their heads together and discuss specific topics that attendees either found interesting or were actively working on.  A lot of good discussion occurred in these open spaces, and a good amount of knowledge was spread around.


DevOps is not one thing.  It is not a set of tools but rather a shift in thinking, and it therefore involves various cultural aspects, which can get very complicated.  I think in the years to come, as DevOps evolves, a lot more of these aspects will become clearer, which will hopefully make it easier for people to get involved and embrace the changes that come along with the DevOps mentality.

Current thought leaders in the DevOps space (many of them in attendance at devopsdays Chicago) are doing a great job of moving the discussion forward, and there are some awesome discussions at the devopsdays events.  Podcasts like Arrested DevOps, The Ship Show, and DevOps Cafe are creating a lot of good discussion around the subject as well.  Judging from my observations at the event, there is still a lot of work to be done before DevOps becomes more common and mainstream.

Analyzing cloud costs

Knowing and controlling the costs of a cloud environment is not only a good skill for an admin/engineer to have, it also greatly helps others inside your organization.  Knowing your environment and its cost overhead also makes you (or your team) look better when you can pinpoint bottlenecks and anomalies and create solutions to mitigate costs or otherwise track cloud resource utilization.  Plus, it can even earn you some extra credit.

So with this in mind, I’d like to talk about a few strategies and tools I have been experimenting with to help map out and accurately model the costs and utilization of different workloads spread across an AWS environment.


ICE

The first tool I’d like to mention, and probably my favorite, is ICE, developed by Netflix to analyze costs across your AWS infrastructure.  It gives you nice graphs and advanced price breakdowns, including spot pricing vs. on demand and many other permutations across your infrastructure.

This is the best explanation I can find, pulled right from their GitHub page:

The ability to trend usage patterns on a global scale, yet decompose them down to a region, availability zone, or service team provides incredible flexibility. Ice allows us to quantify our AWS footprint and to make educated decisions regarding reservation purchases and reallocation of resources.


It has a nice interface and some slick filtering, so breaking things down region by region becomes easy, which is otherwise not the case with the other tools.  ICE is also great for spotting trends and anomalies in your environment, which can go undetected if not viewed in the correct context.

The downside is the overhead associated with getting ICE up and running, but there is a Chef cookbook that will pretty much do the installation for you if you are comfortable with Chef.  You will need to override some attributes, but otherwise it is pretty straightforward.  If you need assistance, let me know and I’d be glad to walk you through getting it set up.

AWS Calculator

This is a handy tool for ballparking and modeling various costs for AWS services.  One disappointing discovery is that it doesn’t help model spot instance prices.


This is great for mocking out what the TCO of a server or group of servers might look like.  It is also good for getting a general feel for what different servers will cost over a certain number of months or years.

Be sure to check this tool often to stay current, because AWS moves quickly, with seemingly constant updates, and has been dropping prices steadily over the past 3 years.  Especially with the increased competition from Microsoft (Azure) and Google (Google Cloud), AWS seems to be constantly slashing prices and adding new improvements and features to its products.

AWS Billing and Cost Management

This one is pretty self explanatory.  It is built right into AWS and, as such, can be a very powerful tool that is easily overlooked.  It offers a variety of detailed information about costs and billing, with some nice graphs and charts for trend spotting, and the data can be exported for analysis, which is also nice (even though I haven’t gotten that far yet).
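If you do export the data, even a few lines of Python can surface useful breakdowns.  Here is a rough sketch; the column names are from the detailed billing report as I remember it, so verify them against your own export:

```python
# Rough sketch: summarize an exported AWS detailed billing CSV by service.
# Assumes pandas is installed and that the export has 'ProductName' and
# 'UnBlendedCost' columns -- the exact names depend on which billing
# report variant you have enabled, so check your own file first.
import pandas as pd

df = pd.read_csv('aws-billing-detailed.csv', low_memory=False)

# Total cost per AWS product, largest first.
by_product = df.groupby('ProductName')['UnBlendedCost'].sum()
print(by_product.sort_values(ascending=False).head(10))
```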

The major downside (in my opinion) is that you can’t get the granular price breakdowns that are available with a tool like ICE.  For example, there isn’t an easy way to break down cost per region or get other more detailed comparisons.

Trusted Advisor

This tool is great, and it is free for basic usage.  This offering from AWS is really nice for finding and optimizing settings according to a number of best-practice recommendations created by Amazon.  Not only does it give you some really nice price breakdowns, it also reports on things like security and performance, which can be equally useful.  Use it often to tighten up areas of your infrastructure and optimize costs.

One downside is that to unlock all of the features and functionality you need to upgrade to the enterprise version, which is obviously more expensive.

AWS ELK Billing

I just found out about this one, but it looks like it might be a very nice solution, leveraging the Logstash + Kibana (ELK) stack.  I have written a post about getting started with the ELK stack, so it shouldn’t be difficult to begin playing around with this solution if you are interested.

If you get this tool up and working I would love to hear about it.

Cost saving tips

I have compiled a list of simple yet powerful tips to help control costs in AWS.  Ideally a combination of all of these tips would be used to help control costs.

  • Upgrade server and service instance generations as often as possible for automatic performance improvements and reduced prices, for example moving from gen 1 to gen 3 instance types.
  • Size servers correctly by keeping them busy.  Servers that are running but aren’t doing anything are essentially wasting money.  Either run them only during the hours they are needed or bump up the utilization per box, by downsizing the server or upping the workload.
  • On that note, size servers according to their workload.  For example, a workload that demands CPU cycles should not be deployed on a memory optimized server.
  • Adopt spot instances and utilize them early on.  Spot prices are significantly lower than on demand prices.  Just be careful, because your spot instances can disappear if you are outbid.
  • In a similar vein, use reserved instances.  These can significantly reduce prices and have the advantage that they won’t disappear, so long running servers and services benefit from this type of cost control.
  • Set up granular billing as early as possible, and create and tune alerts based on expected usage for tighter cost control.  It’s better to start off knowing and controlling environment costs sooner rather than later.
  • Delete unused EBS volumes.  Servers and volumes come and go, but oftentimes EBS volumes become orphaned and sit around doing nothing but costing money.  It is a good idea to clean up unused EBS volumes whenever you can, and of course this process can and should be automated (see the sketch after this list).
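As an example of that last point, here is a minimal sketch of an EBS cleanup script using the boto library.  It only prints candidates; the actual delete call is commented out, and the region is an assumption:

```python
# Minimal sketch: find unattached ("available") EBS volumes with boto.
# Assumes boto is installed and AWS credentials are configured. It only
# prints candidates; uncomment the delete once you trust the output.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

for volume in conn.get_all_volumes(filters={'status': 'available'}):
    print('orphaned volume: %s, %s GiB, created %s'
          % (volume.id, volume.size, volume.create_time))
    # conn.delete_volume(volume.id)
```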


Managing cost and optimizing your cloud infrastructure could really be considered its own discipline in some regards.  Environments can become complex quite quickly, with instances, services, and resources spinning up and down and dynamically scaling to accommodate workloads; this constant evolution can lead to what some call “cloud sprawl.”

The combination of the tools and cost saving tips mentioned above can be a real lifesaver when you are looking to squeeze the most bang for your buck out of your cloud environment.  It can also lead to a much more solid understanding of all the moving pieces in your environment and can help you determine exactly what is going on at any given time, which is especially useful for DevOps admins and engineers.

If you have any other cool tips or tools for controlling AWS or other cloud environment costs, let me know and I’ll be sure to add them here!