Useful Vim Plugins

This post is mostly a reference for folks who are interested in adding a little extra polish and functionality to the stock version of Vim.  The plugin system in Vim is a little confusing at first but is really powerful once you get past the initial learning curve.  This topic has been covered a million times, but a centralized reference for how to set up each plugin is a little harder to find.

Below I have highlighted a sample list of my favorite Vim plugins.  I suggest that you try as many plugins as you can to figure out what suits your needs and workflow best.  The following plugins are the most useful to me, but they certainly won't be the best for everybody, so use this post as a reference for getting started with plugins and try some out to decide which ones are best for your own environment.

Vundle

This is a package manager of sorts for Vim plugins.  Vundle allows you to download, install, search and otherwise manage plugins for Vim in an easy and straightforward way.

To get started with Vundle, put the following configuration at THE VERY TOP of your vimrc.

set nocompatible              " be iMproved, required
filetype off                  " required
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#rc()
"" let Vundle manage Vundle
Bundle 'gmarik/Vundle.vim'
...

Then you need to clone the Vundle project into the path specified in the vimrc from above.

git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim

Now you can install any other defined plugins from within Vim by running :BundleInstall.  This triggers Vundle to download and update the plugins listed in your vimrc.

To install additional plugins, update your vimrc with the plugins you want to install, similar to how Vundle installs itself as shown below.

"" Example plugin
Bundle 'flazz/vim-colorschemes'

Color Schemes

Customizing the look and feel of Vim is a very personal experience.  Luckily there are a lot of options to choose from.

The vim-colorschemes plugin allows you to pick from a huge list of custom color schemes that users have put together and published.  As illustrated above, you can simply add the repo to your vimrc to gain access to a large number of color options.  Then, to pick one, just add the following to your vimrc (after the Bundle command).

colorscheme xoria256

Next time you open up Vim you should see the color scheme you picked.

Syntastic

Syntastic is a fantastic syntax checking and linting tool, easily the best I have found for Vim.  Syntastic offers support for tons of different languages and styles, and even supports third party syntax checking plugins.

Here is how to install and configure Syntastic using Vundle.  The first step is to add Syntastic to your vimrc.

" Syntax highlighting
Bundle 'scrooloose/syntastic'

There are a few basic settings that also need to get added to your vimrc to get Syntastic to work well.

" Syntastic statusline
 set statusline+=%#warningmsg#
 set statusline+=%{SyntasticStatuslineFlag()}
 set statusline+=%*
 " Sytnastic settings
 let g:syntastic_always_populate_loc_list = 1
 let g:syntastic_auto_loc_list = 1
 let g:syntastic_check_on_open = 1
 let g:syntastic_loc_list_height=5
 let g:syntastic_check_on_wq = 0
 " Better symbols
 let g:syntastic_error_symbol = 'XX'
 let g:syntastic_warning_symbol = '!!'

That’s pretty much it.  Having a syntax checker and automatic code linter has been a wonderful boon for productivity.  I have saved myself so much time chasing down syntax errors and other bad code.  I definitely recommend this tool.

YouCompleteMe

This plugin is an autocompletion tool that adds tab completion to Vim, giving it a really nice IDE feel.  I’ve only been testing YCM for a few weeks, but it doesn’t seem to slow anything down much, which is nice.  An added bonus of using YCM with Syntastic is that they work together, so if there are problems with the completions YCM inserts, Syntastic will pick them up.

Here are the installation instructions for Vundle.  The first thing you will need to do is add a Vundle reference to your vimrc.

"" Autocomplete
Bundle 'Valloric/YouCompleteMe'

Then, in Vim, run :BundleInstall – this will download the git repo for YouCompleteMe.  Once the repo is downloaded you will need a few other tools installed to get things working correctly.  Check the official documentation for installation instructions for your system.  On OS X you will need Python, CMake, MacVim and clang support.

xcode-select --install
brew install cmake

Then, to install YouCompleteMe.

cd ~/.vim/bundle/YouCompleteMe
git submodule update --init --recursive  # not needed if you use Vundle
./install.py --clang-completer

vim-better-whitespace

Highlights pesky trailing whitespace automatically.  This one is really useful to just have on in the background to help you catch whitespace mistakes.  I make a lot of these mistakes, so having the highlighting is really nice.

To install it.

"" Whitespace highlighting
Bundle 'ntpeters/vim-better-whitespace'

That’s it.  Vundle should handle the rest.
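
The plugin can also clean things up for you.  As a minimal sketch (the option names below are taken from my own setup, so double check the plugin's README for your version), you can have it strip trailing whitespace every time you save:

"" Strip trailing whitespace on save (assumed option names - check the README)
let g:better_whitespace_enabled = 1
let g:strip_whitespace_on_save = 1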

ctrlp / nerdtree

These tools are useful for file management and traversal.  They become more powerful when you work with a lot of files and move around different directories a lot.  There is some debate about whether to use nerdtree instead of the built-in netrw, but it is still worth checking out different file browsers to see how they work.
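
Installing them follows the same Vundle pattern as the other plugins.  Here is a rough sketch (these are the repos I use; the nerdtree mapping is just an example, so adjust to taste):

"" Fuzzy file finding and a tree-style file browser
Bundle 'kien/ctrlp.vim'
Bundle 'scrooloose/nerdtree'
"" ctrlp maps <C-p> by default; give nerdtree a toggle key
let g:ctrlp_map = '<c-p>'
map <C-n> :NERDTreeToggle<CR>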

Check out Vim Unite for a sort of hybrid file manager that does fuzzy finding like ctrlp but adds extra functionality, like the ability to grep files from within Vim using a mapped key.

Bonus – Shellcheck

This is a shell and bash linting tool that integrates with Vim and is great.  Bash is notoriously difficult to read and debug and the shellcheck tool helps out with that a lot.

Install shellcheck on your system and Syntastic will automatically pick it up and do its linting whenever you save a file.  I have been writing a lot of bash lately and the shellcheck tool has been a godsend for catching mistakes, and it is especially useful in Vim since it runs all the time.
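
Installation is usually just a package away.  The commands below are a sketch for OS X and Ubuntu, and the Syntastic setting is optional since Syntastic normally finds shellcheck on its own once it is on your PATH:

# OS X
brew install shellcheck
# Ubuntu/Debian
sudo apt-get install shellcheck

"" Optional: explicitly tell Syntastic which shell checker to use
let g:syntastic_sh_checkers = ['shellcheck']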

By combining the powers of a good syntax checker with a solid understanding of Bash, you should be that much more productive once you get used to having a built-in syntax and style checker for your scripts.


Dockerizing Sentry

I have created a GitHub project that has basic instructions for getting started.  You can take a look over there to see how all of this works and to get ideas for your own setup.

I used the following links as reference for my approach to Dockerizing Sentry.

https://registry.hub.docker.com/u/slafs/sentry
https://github.com/rchampourlier/docker-sentry

If you have existing configurations to use, it is probably a good idea to start from those.  You can check my GitHub repo for what a basic configuration looks like.  If you are starting from scratch, or are using version 7.1.x or above, you can use the “sentry init” command to generate a skeleton configuration to work from.
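
As a quick sketch, running the command anywhere the sentry package is installed (inside the Sentry container built later in this post, for example) drops a skeleton config you can copy out and edit:

# Generates a skeleton sentry.conf.py (in ~/.sentry by default)
sentry init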

For this setup to work you will need the following prebuilt Docker images/containers. I suggest using something simple like docker-compose to stitch the containers together.

  • redis – https://registry.hub.docker.com/_/redis/
  • postgres – https://registry.hub.docker.com/_/postgres/
  • memcached – https://hub.docker.com/_/memcached/
  • nginx – https://hub.docker.com/_/nginx/

NOTE: If you are running this on OS X you may need to do some trickery and grant special permissions at the host (Mac) level, e.g. create the ~/docker/postgres directory and give it the correct permissions (I just used 777 recursively for testing; make sure to lock it down if you put this in production).
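
For example (loose permissions, suitable only for local testing):

mkdir -p ~/docker/postgres
chmod -R 777 ~/docker/postgres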

I wrote a little script in my GitHub project that takes care of setting up all of the directories on the host OS that need to exist for data to persist.  The script also generates a self-signed cert to use for proxying Sentry through Nginx.  Without the certificate, the statistics pages in the Sentry web interface will be broken.

To run the script, run the following command and follow the prompts.  Also make sure you have docker-compose installed beforehand so you can run all the needed commands.

sudo ./setup.sh

The certs that get generated are self-signed so you will see the red lock in your browser.  I haven’t tried it yet but I imagine using Let’s Encrypt to create the certificates would be very easy.  Let me know if you have had any success generating Nginx certs for Docker containers – I might write a follow-up post.

Preparing Postgres

After setting up directories and creating certificates, the first step to getting up and going is to add the Sentry superuser to Postgres (9.4 or newer).  To do this, you will need to fire up the Postgres container.

docker-compose up -d postgres

Then to connect to the Postgres DB you can use the following command.

docker-compose run postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

Once you are logged in to the Postgres container you will need to set up a few Sentry DB related things.

First, create the role.

CREATE ROLE sentry superuser;

And then allow it to login.

ALTER ROLE sentry WITH LOGIN;

Create the Sentry DB.

CREATE DATABASE sentry;

When you are done in the container, \q will drop out of the postgresql shell.

After you’re done configuring the DB components you will need to “prime” Sentry by running it for the first time.  This will probably take a little bit of time because it also requires building and pulling all the other needed Docker images.

docker-compose build
docker-compose up

You will quickly notice that if you try to browse to the Sentry URL (e.g. the IP/port of your Sentry container, or the docker-machine IP if you’re on OS X) you will get errors in the logs and 503s when you hit the site.

Repair the database (if needed)

To fix this, run the following command against your DB to repair it if this is the first time you have run through the setup.

docker-compose run sentry sentry upgrade

The default Postgres database username and password is sentry in this setup.  As part of the setup, the upgrade prompt will ask you to create a new user and password – make note of what those are.  You will definitely want to change these configs if you use this outside of a test or development environment.

After upgrading/preparing the database, you should be able to bring up the stack again.

docker-compose up -d && docker-compose logs

Now you should be able to get to the Sentry URL and start configuring.  To manage the username/password you can visit the /admin URL and set up the accounts.

Next steps

The Sentry server should come up and let you in, but it will likely need more configuration.  Using the power of docker-compose it is easy to add in any custom configurations you have.  For example, if you need to adjust Sentry-level configuration, all you need to do is edit the file in ./sentry/sentry.conf.py and then restart the stack to pick up the changes.  Likewise, if you need to make changes to Nginx or celery, just edit the configuration file and bump the stack using “docker-compose up -d”.
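
In practice that loop looks something like this (paths are the ones from the example repo):

# Edit the Sentry settings, then bump the stack to pick up the change
vim ./sentry/sentry.conf.py
docker-compose up -d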

I have attempted to configure as many sane defaults as possible in the base config to make the configuration steps easier.  You will probably want to check some of the following settings in the sentry/sentry.conf.py file (a minimal example follows the list).

  • SENTRY_ADMIN_EMAIL – For notifications
  • SENTRY_URL_PREFIX – This is especially important for getting stats working
  • SENTRY_ALLOW_ORIGIN – Where to allow communications from
  • ALLOWED_HOSTS – Which hosts can communicate with Sentry
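
Here is a minimal sketch of what those values might look like; the hostnames and email below are placeholders, not values from my repo:

# sentry/sentry.conf.py (placeholder values)
SENTRY_URL_PREFIX = 'https://sentry.example.com'
SENTRY_ADMIN_EMAIL = 'admin@example.com'
SENTRY_ALLOW_ORIGIN = 'https://sentry.example.com'
ALLOWED_HOSTS = ['sentry.example.com']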

If you have the SENTRY_URL_PREFIX set up correctly you should see something similar to the following when you visit the /queue page, which indicates statistics are working.

Sentry Queue

If you want to set up any kind of email alerting, make sure to check out the mail server settings.

docker-compose.yml example file

The following configuration shows how the Sentry stack should look.  The meat of the logic is in this configuration, but since docker-compose is so flexible you can modify it to use custom commands, different ports or any other configuration you may need to make Sentry work in your own environment.

# Caching
redis:
  image: redis:2.8
  hostname: redis
  ports:
    - "6379:6379"
  volumes:
    - "/data/redis:/data"

memcached:
  image: memcached
  hostname: memcached
  ports:
    - "11211:11211"

# Database
postgres:
  image: postgres:9.4
  hostname: postgres
  ports:
    - "5432:5432"
  volumes:
    - "/data/postgres/etc:/etc/postgresql"
    - "/data/postgres/log:/var/log/postgresql"
    - "/data/postgres/lib/data:/var/lib/postgresql/data"

# Customized Sentry configuration
sentry:
  build: ./sentry
  hostname: sentry
  ports:
    - "9000:9000"
    - "9001:9001"
  links:
    - postgres
    - redis
    - celery
    - memcached
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"


# Celery
celery:
  build: ./sentry
  hostname: celery
  environment:
    - C_FORCE_ROOT=true
  command: "sentry celery worker -B -l WARNING"
  links:
    - postgres
    - redis
    - memcached
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"

# Celerybeat
celerybeat:
  build: ./sentry
  hostname: celerybeat
  environment:
    - C_FORCE_ROOT=true
  command: "sentry celery beat -l WARNING"
  links:
    - postgres
    - redis
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"

# Nginx
nginx:
  image: nginx
  hostname: nginx
  ports:
    - "80:80"
    - "443:443"
  links:
    - sentry
  volumes:
    - "./nginx/sentry.conf:/etc/nginx/conf.d/default.conf"
    - "./nginx/sentry.crt:/etc/nginx/ssl/sentry.crt"
    - "./nginx/sentry.key:/etc/nginx/ssl/sentry.key"

The Dockerfiles for each of these components are fairly straightforward.  In fact, the same configs can be used for the Sentry, Celery and Celerybeat services.

Sentry

# Kombu breaks in 2.7.11
FROM python:2.7.10

# Set up sentry user
RUN groupadd sentry && useradd --create-home --home-dir /home/sentry -g sentry sentry
WORKDIR /home/sentry

# Sentry dependencies
RUN pip install \
 psycopg2 \
 mysql-python \
 supervisor \
 # Threading
 gevent \
 eventlet \
 # Memcached
 python-memcached \
 # Redis
 redis \
 hiredis \
 nydus

# Sentry
ENV SENTRY_VERSION 7.7.4
RUN pip install sentry==$SENTRY_VERSION

# Set up directories
RUN mkdir -p /home/sentry/.sentry \
 && chown -R sentry:sentry /home/sentry/.sentry \
 && chown -R sentry /var/log

# Configs
COPY sentry.conf.py /home/sentry/.sentry/sentry.conf.py

#USER sentry
EXPOSE 9000/tcp 9001/udp

# Making sentry commands easier to run
RUN ln -s /home/sentry/.sentry /root

CMD sentry --config=/home/sentry/.sentry/sentry.conf.py start

Since the customized Sentry config is rather lengthy, I will point you to the GitHub repo again.  There are a few values that you will need to provide but they should be pretty self-explanatory.

Once the configs have all been put into place you should be good to go.  A bonus piece would be to add an Upstart service that takes care of managing the stack if the server gets rebooted or the containers manage to get stuck in an unstable state.  This is fairly easy to do and many other guides and posts have been written about how to accomplish it.
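
A minimal sketch of such an Upstart job (the project path and docker-compose location are assumptions, so adjust them for your host) might look like:

# /etc/init/sentry.conf (sketch)
description "Sentry docker-compose stack"
start on started docker
stop on runlevel [!2345]
respawn
chdir /opt/sentry
exec /usr/local/bin/docker-compose up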


Enable SSL for your WordPress blog

Updated: 11/18/16

The Let’s Encrypt client was recently renamed to “certbot”.  I have updated the post to use the correct name, but if I missed something, use certbot or let me know.

With the announcement of the public beta of the Let’s Encrypt project, it is now nearly trivial to get your site set up with an SSL certificate.  One of the best parts about the Let’s Encrypt project is that it is totally free, so there is pretty much no reason not to protect your blog with an SSL certificate.  The other nice part of Let’s Encrypt is that it is very easy to get your certificate issued.

The first step to get started is grabbing the latest source code from GitHub for the project.  Log on to your WordPress server (I’m running Ubuntu) and clone the repo.  Make sure to install git if you haven’t already.

git clone https://github.com/letsencrypt/certbot.git

There is a shell script you can run that pretty much does everything for you, including installing any packages and libraries it needs as well as configuring paths and other components it needs to work.

cd certbot
./certbot-auto

After the bootstrap is done you have access to the CLI options.  Run the command with the -h flag to print out the help.

./certbot-auto -h

Since I am using Apache for my blog I will use the “--apache” option.

./certbot-auto --apache

There are some prompts you will need to go through to set up the certificates and create an account.

let's encrypt

This process is still somewhat error-prone, so if you make a typo you can just rerun the “./certbot-auto” command and follow the prompts.

The certificates will be dropped into /etc/letsencrypt/live/<website>.  Go double check them if needed.
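
The live directory normally holds symlinks to the current certificate, key and chain files, something like the following:

ls -l /etc/letsencrypt/live/example.com/
# cert.pem  chain.pem  fullchain.pem  privkey.pem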

This process will also generate a new Apache configuration file for you to use.  You can check for the file in /etc/apache2/sites-enabled.  The important part of this config should look similar to the following:

<VirtualHost *:443>
  UseCanonicalName Off
  ServerAdmin webmaster@localhost
  DocumentRoot /var/www/wordpress
  SSLCertificateFile /etc/letsencrypt/live/thepracticalsysadmin.com/cert.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/thepracticalsysadmin.com/privkey.pem
  Include /etc/letsencrypt/options-ssl-apache.conf
  SSLCertificateChainFile /etc/letsencrypt/live/thepracticalsysadmin.com/chain.pem
</VirtualHost>

As a side note, you will probably want to redirect non-https requests to use the encrypted connection.  This is easy enough to do: just find your .htaccess file (mine was in /var/www/wordpress/.htaccess) and add the following rules.

<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{SERVER_PORT} 80
  RewriteRule ^(.*)$ https://example.com/$1 [R,L]
</IfModule>

Before we restart Apache with the new configuration let’s run a quick configtest to make sure it all works as expected.

apachectl configtest

If everything looks okay in the configtest then you can reload or restart apache.

service apache2 restart

Now when you visit your site you should get the nice shiny green lock icon in the address bar.  It is important to remember that certificates issued by the Let’s Encrypt project are valid for 90 days, so you will need to keep up to date and generate new certificates every so often.  The Let’s Encrypt folks are working on automating this process but for now you will need to manually generate new certificates and reload your web server.

let's encrypt

That’s it.  Your site should now be functioning with SSL.

Updating the certificate automatically

To take this process one step further, we can make a script that can be run via cron (or manually) to update the certificate.

Here’s what the script looks like.

#!/usr/bin/env bash

dir="/etc/letsencrypt/live/example.com"
acme_server="https://acme-v01.api.letsencrypt.org/directory"
domain="example.com"
https="--standalone-supported-challenges tls-sni-01"

# Using webroot method
#/root/letsencrypt/certbot-auto --renew certonly --server $acme_server -a webroot --webroot-path=$dir -d $domain --agree-tos

# Using standalone method
service apache2 stop
# Previously you had to specify options to renew the cert but this has been deprecated
#/root/letsencrypt/certbot-auto --renew certonly --standalone $https -d $domain --agree-tos
# In newer versions you can just use the renew command
/root/letsencrypt/certbot-auto renew --quiet
service apache2 start

Notice that I have the “webroot” method commented out.  I run a service (Varnish) on port 80 that proxies traffic but also interferes with LE, so I chose to run the standalone renewal method.  It is pretty easy; the main difference is that you need to stop Apache before you run it, since Apache binds to ports 80/443.  The downtime is okay in my case.

I chose to put the script into a cron job and have it run every 45 days so that I don’t have to worry about logging on to my server to regenerate the certificate.  Here’s what a sample crontab for this job might look like.

0 0 */45 * * /root/renew_cert.sh

Note that cron’s day-of-month field can’t express “every 45 days” exactly, so this entry effectively fires on the first of each month – that is still frequent enough, since the certificates last 90 days and certbot renew only replaces ones that are close to expiring.  This is a straightforward process and will help with your search engine juice as well.


ECS cluster turnup with CoreOS and Terraform

Recently I have been evaluating different container clustering tools and technologies.  It has been a fun experience thus far; the tools and community being built around Docker have come a long way since I last looked.  So for today’s post I’d like to go over ECS a little bit.

ECS is essentially the AWS version of container management.  ECS takes care of managing your Docker (container) infrastructure by handling creation, management, destruction and scheduling, as well as providing API integration with other AWS services, which is really powerful.  To get ECS up and running, all you need to do is create an ECS cluster, either from the AWS console or from some other AWS integration like the CLI or Terraform, then install the agent on the servers that you would like ECS to schedule work on.  After setting up the agent and cluster name you are basically ready to go: create a task and then create a service to start running containers on the cluster.  Some cool new features were announced at this year’s re:Invent conference but I haven’t had a chance to look at them yet.

First impression of ECS

The best part about testing ECS by far has been how easy it is to get set up and running.  It took less than 20 minutes to go from nothing to a fully functioning cluster that was scheduling containers to hosts and receiving load.  I think the most powerful aspect of ECS is its integration with other AWS services.  For example, if you need to attach containers/services to a load balancer, the AWS infrastructure is already there, so the different pieces of the infrastructure mesh really well together.

The biggest downside so far is that the ECS console interface is still clunky.  It is functional, and I have been able to use it to do everything I have needed, but it just feels like it needs some polish – things are nested in menus and usually not easy to find.  I’m sure there are plans to improve the interface, and as mentioned above some new features were recently announced, so I have a feeling there will be some nice improvements on the way.

I haven’t tried the CLI tool yet but it looks promising for automating containers and services.

Setting things up

Since I am a big fan of CoreOS I decided to try turning up my ECS cluster using CoreOS as the base OS and Terraform to do the heavy lifting and provisioning.

The first step is to create your cluster.  I noticed in the AWS console there was a configuration wizard that guides you through your first cluster, which was annoying because there wasn’t a clean way to just create the cluster, so you will need to follow the on-screen instructions to get your first environment set up.  If any of this is unclear there is a good guide for getting started with ECS here.
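
If you would rather skip the console wizard, the AWS CLI can create a cluster directly.  Here is a quick sketch (the cluster name is just an example, and the second command is handy later for confirming that agents have registered):

# Create the cluster, then later confirm container instances have checked in
aws ecs create-cluster --cluster-name my-cluster
aws ecs list-container-instances --cluster my-cluster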

After your cluster has been created there is a menu that shows your ECS environments.

ECS cluster menu

Next, you will need to bring up the nodes that will be connecting to this cluster.  The first part of this is to get your cloud-config set up to connect to the cluster.  I used the CoreOS docs to set up the ECS agent, making sure to change the ECS_CLUSTER= section in the config.

#cloud-config

coreos:
  units:
    -
      name: amazon-ecs-agent.service
      command: start
      runtime: true
      content: |
        [Unit]
        Description=Amazon ECS Agent
        After=docker.service
        Requires=docker.service
        Requires=network-online.target
        After=network-online.target

        [Service]
        Environment=ECS_CLUSTER=my-cluster
        Environment=ECS_LOGLEVEL=warn
        Environment=ECS_CHECKPOINT=true
        ExecStartPre=-/usr/bin/docker kill ecs-agent
        ExecStartPre=-/usr/bin/docker rm ecs-agent
        ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent
        ExecStart=/usr/bin/docker run --name ecs-agent --env=ECS_CLUSTER=${ECS_CLUSTER} --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} --env=ECS_CHECKPOINT=${ECS_CHECKPOINT} --publish=127.0.0.1:51678:51678 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/lib/aws/ecs:/data amazon/amazon-ecs-agent
        ExecStop=/usr/bin/docker stop ecs-agent

Note the Environment=ECS_CLUSTER=my-cluster line – this is the most important bit for getting the server to check in to your cluster, assuming you named it “my-cluster”.  Feel free to add any other values your infrastructure may need.  Once you have the config how you want it, run it through the CoreOS cloud-config validator to make sure it checks out.  If everything looks okay there, your cloud-config should be ready to go.
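
If you have the coreos-cloudinit binary handy (on an existing CoreOS host, for example), a local validation pass looks roughly like this; treat the flags as a sketch and fall back to the hosted validator if your version differs:

coreos-cloudinit -validate -from-file=cloud-config.yml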

You can find more info about how to configure the ECS agent in the docs here.

Once you have your cloud-config in order, you will need to get your Terraform “recipe” set up.  I used this awesome GitHub project as the base for my own project.  The Terraform logic there basically creates an AWS launch config and autoscaling group (using the cloud-config from above) to launch instances into your cluster.  The ECS agent takes care of the rest once your servers are up and reporting in to the cluster.

launch_config.tf

resource "aws_launch_configuration" "ecs" {
  name = "ECS ${var.cluster_name}"
  image_id = "${var.ami}"
  instance_type = "${var.instance_type}"
  iam_instance_profile = "${var.iam_instance_profile}"
  key_name = "${var.key_name}"
  security_groups = ["${split(",", var.security_group_ids)}"]
  user_data = "${file("../cloud-config/ecs.yml")}"

  root_block_device = {
    volume_type = "gp2"
    volume_size = "40"
  }
}

Notice the user_data section.  This is where we inject the cloud config from above to provision CoreOS and launch the ECS agent.

autoscaler.tf

resource "aws_autoscaling_group" "ecs-cluster" {
  availability_zones = ["${split(",", var.availability_zones)}"]
  vpc_zone_identifier = ["${split(",", var.subnet_ids)}"]
  name = "ECS ${var.cluster_name}"
  min_size = "${var.min_size}"
  max_size = "${var.max_size}"
  desired_capacity = "${var.desired_capacity}"
  health_check_type = "EC2"
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  health_check_grace_period = "${var.health_check_grace_period}"

  tag {
    key = "Env"
    value = "${var.environment_name}"
    propagate_at_launch = true
  }

  tag {
    key = "Name"
    value = "ECS ${var.cluster_name}"
    propagate_at_launch = true
  }
}

There are a few caveats I’d like to highlight with this approach.  First, I already had an AWS infrastructure in place that I was testing against.  So I didn’t have to do any of the extra work to create a VPC or a gateway for the VPC.  I didn’t have to create the security groups and subnets either; I just added them to the Terraform code.

The other caveat is that if you want to use the GitHub project I linked to, you will need to make sure that you populate the variables with your own environment specific values.  That is why having the VPC, subnets and security groups already was handy for me.  Be sure to browse through the variables.tf file and substitute in your own values.  As an example, I had to update the variables to use the CoreOS 766.4.0 image.  This AMI will be specific to your AWS region so make sure to look up the AMI first.

variable "ami" {
  /* CoreOS 766.4.0 */
  default = "ami-dbe71d9f"
  description = "AMI id to launch, must be in the region specified by the region variable"
}

Another part I had to modify to get the GitHub project to work was adding in my AWS credentials, which look similar to the following.  Make sure to update these variables with your own ID and secret.

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

variable "access_key" {
  description = "AWS access key"
  default = "XXX"
}

variable "secret_key" {
  description = "AWS secret access key"
  default = "xxx"
}

Make sure to also copy/edit the autoscaler.tf and launch_config.tf files to reflect anything that is specific to your environment (Terraform will complain if there are issues).

After you have combed through the variables.tf and updated the Terraform files to your liking you can simply run terraform plan -input=false and see how Terraform will create the ASG for you.

If everything looks good, you can run terraform apply -input=false and Terraform will go out and start building your new ECS infrastructure for you.  After a few minutes check the EC2 console and your launch config and autoscaling group should be there.  If that all looks okay, check the ECS console and your new servers should show up and be ready to go to work for you!

NOTE: If you are starting from scratch, it is possible to do all of the infrastructure provisioning via Terraform, but that is out of the scope of this post because there are a lot of steps to it.



Thoughts on Working Remotely

I’d like to share a few nuggets that I have learned so far in my experience working as a remote employee.  I have been working from home for around a year and a half and have learned some lessons along the way.  While I absolutely recommend trying the remote option if possible, there are a few things that are important to know.

Working remotely is definitely not for everybody.  In order to be an effective remote employee you have to have a certain amount of discipline, internal drive and self motivation.  Additionally, you need to be a good communicator (covered below).  If you have trouble staying on task or finding things to do at work or even have issues working by yourself in an isolated environment, you will quickly discover that working remotely may be more stressful than working in an office where you get the daily interactions and guidance from others.

Benefits

That being said, I feel that in most cases, the positives outweigh the negatives.  Below are a few of the biggest benefits that I have discovered.

  • There is little to no commute.  Long commutes, especially in big cities create a certain amount of stress and strain that you simply don’t have to deal with when working from home.  As a bonus you save some cash on gas and miles of wear and tear on your vehicle.
  • It is easier to avoid distractions.  This of course depends on how you handle your work but if you are disciplined it becomes much easier to get work done with less distractions.  At home, if you manage to separate home from work (more on that topic below) then you don’t need to worry about random people stopping over to your desk to shoot the shit or bother you.  By avoiding simple distractions you can become much more productive in shorter periods of time.
  • No dress code.  This is a surprisingly simple but powerful bonus to working from home.  Having a criteria for dress code was actually stressful for me in previous jobs.  I always disagreed with having a dress code and didn’t understand why I couldn’t wear a t-shirt and jeans to work.  Now that I can wear whatever I want I feel more comfortable and more relaxed which leads to better productivity.
  • Schedule can be more flexible.  I can pick my own hours for the most part.  Obviously it is best to get into a routine of working the same hours each day, but if something comes up I can step out for a few hours and just make the hours up in the evening in most cases, and it won’t be a big deal to coworkers.  This flexibility is a great perk of working remotely and it gives you much more time to yourself when needed because you aren’t restricted to a set schedule.

Achieving a Work/life balance

Maintaining a balance between life at home and life at work can get very blurry when you telecommute.  I would argue that finding a balance between personal life and work is the single most important thing to work towards when making the transition from on-site employment, because it directly leads to your happiness (or sorrow), which in turn influences all other aspects of your life, including activities and relationships outside of work.

It is super easy to get into the habit of “always being around” and working extra, and oftentimes crazy, hours when you are at home.  One thing that has helped in my own experience to improve the work/life balance and alleviate this always-working tendency is creating routines.

I try to start work and end work at the same time each day during the week.  Likewise, I make a point to take breaks throughout the day to break up the time.  A few things I like to do are take a 30ish minute walk around the same time every day, and I also have a coffee ritual in the morning that always precedes work time.  These daily cues help me get into the flow of the day and start it the same way every time.

Another mechanism I have discovered to help cope with the work hours is to leave work at work.  Find a way to create a clear distinction between home and work, either by creating an office at home where work stays, or by finding a coffee shop or co-working space.  As a side note, I have found 2-3 days working at a coffee shop/co-working space to be the best middle ground for me, but everybody is different, so if you are new to remote work you will need to experiment.  That way you can have a place that represents what a workplace should be, and when you leave that place, the work stays there.  It is very important to separate home from work if you don’t already have a clear distinction between the two.

Some folks mention that it can get lonely.  I definitely agree with this sentiment.  On the up side, working in this type of environment can sort of force you to find ways to interact with people.  It can feel uncomfortable at first, but finding social activities will help alleviate the loneliness.  Coffee shops and co-working spaces are a great place to start.  I find that working in an environment with others helps mix things up and having the extra interaction really helps feeling like you are a part of a community.  These environments are a great solution if you are introverted and have a hard time getting out and meeting people.

Regardless of what exactly you do, it is absolutely critical to get out of your house.  This should be a no brainer but I can’t stress the importance enough.  Even if you’re just taking walks or going to the store, you need to make sure that you find things to do to get out of the house.  I have found some things that work but it is something again that you will need to experiment with.

If you are ambitious then I suggest getting involved in some communities outside of work.  Meeting new people (outside of a work environment) is a very powerful tool for managing your work/life balance.  Obviously this advice applies to more scenarios than working remotely, but I think it becomes much more important here.  If you want some ideas for ways to get out or communities to join, feel free to email or comment and I can let you know what has worked for me.

Communicate

Another important piece of the social aspect that I have discovered is that it is VERY important to have many open communication channels with coworkers.  Google Hangouts, Slack, Screenhero, WebEx, Skype, email, IRC and any other collaboration tools you can find are super important for communicating with coworkers and for building relationships and culture in distributed work environments.  In my experience, if you are working as part of a team and aren’t a great communicator, relationships with coworkers can quickly become strained.

Also, having regular meetings with key members of your team is important.  A once-a-week check-in with any managers is a good starting point.  It helps you keep track of what you’re doing and it helps others on your team understand the type of work you’re doing so you’re not as isolated.  Gaining the trust of your coworkers is always very important.

Conclusion

For me, the most difficult thing about transitioning to working from home was maintaining a good work/life balance.  You are 100% responsible for how you choose to spend your time, so it becomes important to make the right decisions when it comes to prioritizing.

For example, one thing I have struggled with is how to work the right amount of time.  There was a stretch where I was working 12-14 hour days just because I kept finding more and more things to do.  While that is good for your employer, it is not good for you or anyone around you.  The work will always be there, so you have to find strategies to help you step away from work when you have put in enough hours for the day.

Everybody is different so if you are new to telecommuting/working remotely I encourage you to experiment with different techniques for managing your work/life balance.  While I feel that working remotely is for the most part a bonus, it still has its own set of issues so please be careful and don’t work too much, and especially don’t expend extra energy or get too stressed out about things you can’t control.
