Setting up a pfSense NAT instance in AWS

One important aspect of cloud deployments that often gets overlooked, especially at startups, is security.  So I thought I would take some time to go through the process of setting up a NAT instance on AWS with full firewall capabilities.  There are instructions and documentation for this process that are good but not completely clear, so I will attempt to fill in some of the gaps I ran into when attempting to set this up myself.

There is one thing to take note of if you have used pfSense before: this firewall isn’t free on AWS.  There is a small hourly charge that works out to about $500/yr (roughly $42/month).  If you look at other commercial solutions with similar functionality you are looking at hundreds to thousands of dollars per month in costs.  Long story short, the cloud image of pfSense has a tiny cost associated with it but is very much worth it.

Just for reference I put together a few comparison prices.

  • Barracuda Web Application Firewall – $1.04-1.76/hr (up to ~$1,300/month)
  • Vyatta – $0.30-1.50/hr (up to ~$1,100/month)
  • Sophos UTM – $0.35-2.80/hr (up to ~$2,000/month)
  • pfSense – $0.07/hr (~$42/month)

As you can see, pfSense is very reasonable compared to some of the other bigger players.  You can build an r3.8xlarge instance and the software price won’t change which doesn’t seem to be the case with others.  One bonus to choosing pfSense is that you automatically qualify for support by agreeing to the ToS when getting the pfSense AMI set up.

Finally, pfSense is rock solid, being built on top of FreeBSD, and is thoroughly tested.  I have been running pfSense on other projects outside of AWS for 5+ years and have never had an issue with it outside of a dead hard drive one time.  Other added benefits of choosing pfSense are that updates are frequent and thoroughly tested, there are tons of add-ons (including IPS and VPN packages) so additional functionality can be built on top, and there is great community support as well.

Getting started

There are a few good resources that I found useful when working through this problem, which got me most of the way to a working setup.  They are listed below.

And here is the link to my question about how to do this on Server Fault; there is some good detail in the post over there.

Setting up the NAT in pfSense

The first issue that was confusing was getting the network interfaces set up and configured.  For this setup you will need two interfaces, preferably with static IP addresses.  You will also need to make sure that you disable source/destination checks for the interface that will be acting as the LAN interface that the NAT traffic goes through.  Disabling source and destination checks is pretty straightforward and is detailed in pretty much all of the guides.
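If you prefer to script that step instead of clicking through the console, the source/destination check can also be flipped with the AWS CLI, assuming you have it installed and configured.  The IDs below are just placeholders.

# Disable the source/destination check on the pfSense instance
aws ec2 modify-instance-attribute --instance-id i-12345678 --no-source-dest-check

# Or target just the LAN network interface instead of the whole instance
aws ec2 modify-network-interface-attribute --network-interface-id eni-12345678 --no-source-dest-check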

You should note that there will be firewall tabs for LAN as well as WAN; if you can keep these two straight it should be much easier to troubleshoot and configure your pfSense machine.  Out of the box, the firewall on pfSense will not be configured to allow your LAN interface to do any sort of NATing, so you will need to manually create rules to get started.  If you check the WAN firewall tab you should notice some access rules, but the LAN tab should be empty.  Most of the work we will be doing will be on the LAN firewall.

The first rule to set up to make things easier to troubleshoot is a ping rule.  There is a WAN rule for ping but not for LAN.  You can essentially copy the WAN rule into a new one and modify it to look similar to the following.

[Screenshot: LAN firewall rule]

This rule will serve as the template for the other rules that need to be put in place.  The other rules will be for outbound web access.  Just copy this rule into a new rule, change the protocol to TCP, and make one rule that allows port 80 and another that allows 443.  The result should look similar to what I have shown below.

[Screenshot: Firewall rules for the NAT]
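If you would rather add simple pass rules from the pfSense shell than the web UI, recent versions ship with an easyrule helper.  This is just a sketch; run easyrule with no arguments first to confirm the exact argument order on your version.

# Allow ping (ICMP) from anywhere on the LAN interface
easyrule pass lan icmp any any

# Allow outbound web traffic on ports 80 and 443
easyrule pass lan tcp any any 80
easyrule pass lan tcp any any 443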

Just a quick note: if at any point you are having trouble seeing traffic or are getting stuck in your troubleshooting, an excellent way to figure out what is going on is the logging provided by pfSense.  You can access the various logs by selecting Status -> System Logs and then highlighting the Firewall tab.
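The same firewall log can be read from the pfSense shell as well.  On the 2.x releases the logs are circular, so a plain tail of the file won’t work; something along these lines should do it (adjust the path if your version stores logs elsewhere).

# Dump the last entries from the firewall log (circular log format on pfSense 2.x)
clog /var/log/filter.log | tail -n 50

# Follow it live while you generate test traffic
clog -f /var/log/filter.log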

Modifying your outbound NAT

Here is what your outbound NAT rule should look like.

[Screenshot: Outbound NAT rule]

Notice the “Networks_to_NAT” value in the source section.  This is a pfSense alias that can be used as a sort of variable to help ease management.  You can either use this alias or specify the local subnet you want to use here.  To check the values in your alias you can go to Firewall -> Aliases.
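As a sanity check, you can also verify the NAT rules that pfSense actually loaded into pf from the shell.  The client IP below is a placeholder.

# Show the outbound NAT rules currently loaded into pf
pfctl -s nat

# Watch state entries while a client behind the firewall generates traffic
pfctl -s state | grep <client_ip>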

Conclusion

This setup will provide you with a nice, easy way to manage your network in AWS.  The guides for setting up a NAT instance are a good first step, but with a firewall in place you can do so many other things, especially auditing, that just aren’t available or viable with a stock AWS NAT instance, or that are way out of your price range with some of the other commercial solutions available.

pfSense also provides the capability to add more advanced tools like IDS/IPS, VPN and high availability if you choose, so there is nice room for expansion.  Even if you don’t take advantage of all of the additional components of pfSense, you will still have a rock solid firewall and NAT instance that is suitable for production workloads at a fraction of the cost of other commercial solutions.


Autosnap AWS snapshot and volume management tool

This is my first serious attempt at a Python tool on GitHub.  I figured it was about time; I’ve been leveraging open source tools for a long time, so I might as well try to give a little bit back.  Please check out the project and leave feedback by emailing, opening a GitHub issue, or commenting here.  I’d love to see what can be done with this tool; there are lots of bugs to shake out and things to improve.  Even better if you have some code you’d like to contribute, as this is very much a work in progress!

Here is the project – https://github.com/jmreicha/autosnap.

Introduction

Essentially, this tool is designed to ease the management of the snapshot and volume lifecycle in an AWS environment.  I have discovered that snapshots and volumes can be used together to form a simple backup management system, so by simplifying the management of these resources and utilizing the power of the AWS API, you can easily manage backups of your AWS data.

While this obviously isn’t a full blown backup tool, it can do a few handy things, like leveraging tags to create and destroy backups based on custom expiration dates and creating snapshots based on a few other criteria, all managed with tags.  Another cool thing about handling backups this way is that you get amazing resiliency, since snapshots are stored in S3, as well as dirt cheap storage.  Obviously if you have a huge number of servers and volumes your mileage will vary, but this solution should scale into the hundreds, if not thousands, of volumes pretty easily.  The last big bonus is that you get nice granularity for backups.

For example, if you wanted to keep a week’s worth of backups across all your servers in a region, you would simply use this tool to set an expiration tag of 7 days and voila: you will have rolling backups, based on snapshots, for the previous seven days.  You can get the backup schedule fairly granular, because the snapshots are tagged down to the hour.  It would be easy to get them down to the second if that is something people would find useful; I could see DB snapshots being important enough, but for now it is set to the hour.
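Under the hood this is all just EC2 tagging, so to give a feel for it, here is a hypothetical example of the kind of tags involved, set with the plain AWS CLI.  The tag keys below are made up for illustration; the names autosnap actually uses are documented in the project README.

# Purely illustrative tag names -- check the autosnap README for the real keys
aws ec2 create-tags --resources vol-12345678 \
    --tags Key=autosnap,Value=true Key=autosnap:expiration,Value=7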

The one drawback is that this needs to be run on a daily basis, so you would need to add it to a cron job or some other tool that runs tasks periodically.  Not really a drawback so much as a side note to be aware of.
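For example, a crontab entry along these lines would create new snapshots and prune expired ones once a day.  The install path is just a guess; point it at wherever autosnap lives on your system.

# Run autosnap every night at 2:30 AM -- create new snapshots, then remove expired ones
30 2 * * * /usr/local/bin/autosnap --create-snaps --remove-snaps >> /var/log/autosnap.log 2>&1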

Configuration

There is a tiny bit of overhead to get started, so I will show you how to get going.  You will need to either set up a config file or let autosnap build one for you.  By default, autosnap will help create one the first time you run it, so you can use this command to build it:

autosnap

If you would like to provide your own config, create a file called '.config' in the base directory of this project.  Check the README on the GitHub page for the config variables and for any clarifications you may need.
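To give a rough idea of the shape of that file, here is a purely hypothetical sketch; the variable names below are made up for illustration, so use the names documented in the README instead.

# .config -- hypothetical example only; see the project README for the real variable names
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = us-east-1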

Usage

Use the --help flag to get a feeling for some of the functions of this tool.

$ autosnap --help

usage: autosnap [--config] [--list-vols] [--manage-vols] [--unmanage-vols]
 [--list-snaps] [--create-snaps] [--remove-snaps] [--dry-run]
 [--verbose] [--version] [--help]

optional arguments:
 --config          create or modify configuration file
 --list-vols       list managed volumes
 --manage-vols     manage all volumes
 --unmanage-vols   unmanage all volumes
 --list-snaps      list managed snapshots
 --create-snaps    create a snapshot if it is managed
 --remove-snaps    remove a snapshot if it is managed
 --version         show program's version number and exit
 --help            display this help and exit

The first thing you will need to do is let autosnap manage the volumes in a region:

autosnap --manage-vols

This command will simply add some tags to help with the management of the volumes.  Next, you can take a look and see which volumes got picked up and are now being managed by autosnap:

autosnap --list-vols
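If you want to double-check the raw tags that were applied, you can also inspect a volume directly with the AWS CLI (the volume ID below is a placeholder):

# List every tag attached to a given volume
aws ec2 describe-tags --filters "Name=resource-id,Values=vol-12345678"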

To take a snapshot of all the volumes that are being managed:

autosnap --create-snaps

And you can take a look at your snapshots:

autosnap --list-snaps

Just as easily you can remove snapshots older than the specified expiration date:

autosnap --remove-snaps

There are some other useful features and flags but the above commands are pretty much the meat and potatoes of how to use this tool.

Conclusion

I know this is not going to be super useful for everybody, but it is definitely a nice tool to have if you work with AWS volumes and snapshots on a semi-regular basis.  As I said, this can easily be improved, so I’d love to hear what kinds of things to add or change to make this a great tool.  I hope to start working on some more interesting projects and tools in the near future, so stay tuned.


Selecting a Cloud Provider

Well the deed is done.  I finally have migrated the blog off my home server and onto a hosted provider.  The site is finally starting to get big enough that I felt like a migration would be a step in the right direction.  When I originally created this blog it was more of an experiment than anything else, but over the past 2 years I have seen the blog and my writing grow in directions I really didn’t anticipate when it was created, which is very exciting for me.

Due to this growth, one of my main concerns is stability.  At the stage this project is in now, I just want a place that has good bandwidth and is available 24×7 whenever I want to write something.  I don’t want a power outage at my home or a disruption to my internet service to make the site unreachable.  My traffic numbers are relatively low compared to some other popular websites, but they aren’t anything to sneeze at either now that the blog and my writing have become more established, so it is important for me to have the site available to people all the time.  If you’re interested in the migration, comment or send me a note and I can do a quick write-up or go into more depth, but I figured I’d spare the details because there are already a number of good guides out there on how to do it.  The only real issue was upgrading from Apache 2.2 -> 2.4.

I’d like to take some time and talk about moving to a cloud provider.  It may be an unfamiliar process to some, so there might be a few good takeaways from covering the topic briefly.  Cloud hosting and cloud technologies are evolving into much more than just a fad, and a lot of companies are trying to position themselves for the next generation of cloud computing.  Choosing a cloud provider offers a number of benefits, including lower overhead and maintenance costs compared to running your own servers and data center.  It also alleviates infrastructure maintenance, stability issues, and dealing with hardware failures and troubleshooting.

There are a very large number of cloud providers out there currently and the competition is fierce.  In fact, the competition is so fierce right now that Google and Amazon have recently begun a price war.  Here are just a few of the providers on the scene:

  • Heroku
  • Amazon AWS
  • Joyent
  • Azure
  • Rackspace
  • Linode
  • DreamHost
  • Google Cloud Platform
  • Digital Ocean

In my current view they are all great in their own way; each of these providers can deliver value differently.  If you are a business looking to move to the cloud, then AWS is the way to go.  The Amazon cloud has been tested and vetted by some of the largest users of cloud services (Netflix, Pinterest, LinkedIn).  It has been around for a very long time in cloud years, so Amazon has been able to work out most of the issues as well as create a gold standard.  The trade-off is that for somebody new to cloud computing the services and interface can be confusing.  There are a lot of bells and whistles, which many people do not need.

If you’re a Microsoft shop you might take a look at the Azure platform.  Azure does a really good job of integrating with other Microsoft products and services.  This would be a logical move for anybody that leverages Microsoft technologies.

Rackspace leverages OpenStack for its underlying architecture.  OpenStack is open source software, so there are some really interesting things revolving around that platform and technology.

Ultimately I decided to go with Digital Ocean for this project, for a couple of reasons.  First, the price was there.  My blog doesn’t require a lot of horsepower, so I was able to spin up a 1GB RAM, 1 CPU, 30GB disk, 1TB bandwidth Ubuntu server on DO for $10/month.  The second reason that I like DO, and why many other people are moving towards it, is that the setup and configuration process is stupidly simple and easy.  Create an account, set up a credit card, and away you go, up into the clouds.  The process from start to having a new server up literally took me 10 minutes.

That is the beauty of DO’s approach to cloud: make things as simple as possible for people to get up and going.  Certainly there aren’t nearly as many features as on many other platforms, but for many scenarios people just want a server to play around with, and DO does a great job of making that possible.

The point I’m trying to get at here is that there are basically different tools for different jobs.  You need to evaluate what is out there and how it will suit your needs.  If you aren’t a Microsoft shop then you might not need to use Azure.  The good news is that with all of this competition and rivalry, prices are dropping and more options and niches are becoming available as products mature and new providers enter the scene.  The cloud introduces some pretty neat features and technologies, but ultimately you need to decide what you or your business is looking for before you make a decision.


Creating a new instance-store AMI for Amazon AWS EC2

This is a HOWTO for building your own instance-store backed AMI image, suitable for creating a Paid AMI. The motivation for doing this HOWTO is simple: I tried it, and it has a lot of little gotchas, so I want some notes for myself. This HOWTO assumes you’re familiar with launching EC2 instances, logging into them, and doing basic command line tasks.

Choosing a starting AMI

There’s a whole ton of AMIs available for use with EC2, but not quite so many which are backed by instance-store storage. Why’s that? Well, EBS is a lot more flexible and scalable. The instance-store images have a fairly limited size for their root partition. For my use case this isn’t particularly important, and for many use cases it’s trivial to mount some EBS volumes for persistent storage.
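For reference, attaching persistent storage to an instance-store instance is only a few commands. This is a rough sketch using the same ec2-api-tools installed later in this post; the IDs, zone, and device names are placeholders, and newer instance types may expose the device as /dev/xvdf rather than /dev/sdf.

# Create a 100 GB EBS volume in the same availability zone as the instance
ec2-create-volume --size 100 -z us-east-1a

# Attach it to the running instance as /dev/sdf (placeholder IDs)
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf

# On the instance: make a filesystem and mount it
mkfs.ext4 /dev/xvdf
mkdir -p /data
mount /dev/xvdf /data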

Amazon provides some of their Amazon Linux AMIs which are backed by either EBS or instance-store, but they’re based on CentOS, and frankly, I’ve had so much trouble with CentOS in the past that I just prefer my old standby: Ubuntu. Unfortunately, I had a lot of trouble finding a vanilla Ubuntu 12.04 LTS instance-store backed image through the AWS Console. They do exist, however, and they’re provided by Canonical. Thanks guys!

Here’s a list of all the 12.04 Precise official AMIs:
http://cloud-images.ubuntu.com/releases/precise/release/

Conveniently, there’s a Launch button right there for each AMI instance. Couldn’t be easier!

Installing the EC2 Tools

Once you’ve got an instance launched and you’re logged in and sudo‘d to root, you’ll need to install the EC2 API and AMI tools provided by Amazon. The first step is, of course, to download them. Beware! The tools available through the Ubuntu multiverse repositories are unfortunately out of date.

The latest EC2 API tools can be found here:
http://aws.amazon.com/developertools/351

The latest EC2 AMI tools can be found here:
http://aws.amazon.com/developertools/368

I like to copy the download link and use wget to download them rather than scp‘ing them from my client machine.

sudo su
mkdir -p /tmp/ec2-tools
cd /tmp/ec2-tools
wget -O ec2-api-tools.zip 'http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9'
wget -O ec2-ami-tools.zip 'http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip'

Before we can install the EC2 tools, we need to install a few packages that our vanilla Ubuntu is lacking, namely zip, Java, and Ruby.

apt-get install zip
apt-get install openjdk-6-jre-lib
apt-get install ruby

Once we have those installed, we need to unzip our packages and install them to the /usr/local directory.

unzip "*.zip"
find . \( -name bin -o -name lib -o -name etc \) | \
    xargs -I path cp -r path /usr/local

Lastly we have to set the EC2_HOME and the JAVA_HOME environment variables for the EC2 tools to work properly. I like to do this by editing /etc/bash.bashrc so anyone on the machine can use the tools without issue.

echo -e "\nexport EC2_HOME=/usr/local\nexport JAVA_HOME=/usr\n" >> /etc/bash.bashrc

Once we log out and back in, those variables will be set, and the EC2 tools will be working.

# exit
$ sudo su
# ec2-version
1.6.7.4 2013-02-01

Customizing Your AMI

At this point, your machine should be all set for you to do whatever customization you need to do. Install libraries, configure boot scripts, create users, get your applications set up, anything at all. Once you’ve got a nice, stable (rebootable) machine going, then you can image it.
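What that looks like obviously depends on your application, but just as a hedged illustration, the kind of thing I mean is:

# Example customizations only -- swap in whatever your application actually needs
apt-get update
apt-get install -y nginx git

# Create an application user and make sure the service starts on boot
adduser --disabled-password --gecos "" deploy
update-rc.d nginx defaults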

Bundling, Uploading and Registering your AMI

This is actually pretty easy, but I’ll still go through it. The Amazon documentation is fairly clear, and I recommend following along with that as well, as it explains all the options to each command.

Here’s the official Amazon documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

  1. Create an S3 bucket. This is where you’ll upload your AMI images. If you already have a bucket, you can use that.
  2. Download your AWS security certificates and copy your API keys. They can be found here: https://portal.aws.amazon.com/gp/aws/securityCredentials
  3. Copy your credentials to the instance you’re going to image. First, create a directory to store them in on your instance:
    mkdir -p /tmp/cert
    chmod 777 /tmp/cert
  4. Then copy them from the place you downloaded them on your client machine, to your instance:
    scp -i <keypair_name> pk-*.pem cert-*.pem ubuntu@<host_name>:/tmp/cert
  5. Bundle your instance image. The actual image bundle and manifest will end up in /tmp.
    cd /tmp/cert
    ec2-bundle-vol -k <private_keyfile> -c <certificate_file> \
        -u <user_id> -e <cert_location>
    cd /tmp
  6. Upload your bundled image. Note that <your-s3-bucket> should include a path that is unique to this image, such as my-bucket/ami/ubuntu/my-ami-1, otherwise things will get very messy for you, because an image consists of an image.manifest.xml file and many chunks which compose the image itself, which are generically named by default when you use this tool.
    ec2-upload-bundle -b <your-s3-bucket> -m <manifest_path> \
        -a <access_key> -s <secret_key>
  7. Register your new AMI.
    ec2-register <your-s3-bucket>/<path>/image.manifest.xml -n <image_name> \
        -O <your_access_key> -W <your_secret_key>
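To make the placeholders a little more concrete, here is what the whole bundle/upload/register sequence might look like end to end. Everything here (bucket name, key file names, account ID) is made up, so substitute your own values.

cd /tmp/cert

# Bundle the running volume -- the image and manifest land in /tmp by default
ec2-bundle-vol -k pk-EXAMPLE.pem -c cert-EXAMPLE.pem \
    -u 123456789012 -e /tmp/cert

# Upload the bundle to S3 under a path unique to this image
ec2-upload-bundle -b my-bucket/ami/ubuntu/my-ami-1 -m /tmp/image.manifest.xml \
    -a <access_key> -s <secret_key>

# Register the AMI so it shows up in the console
ec2-register my-bucket/ami/ubuntu/my-ami-1/image.manifest.xml -n my-ami-1 \
    -O <access_key> -W <secret_key>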

That’s it! You should be all set with a new AMI, which should also show up in the AWS Console.
