ELK stack

Running ELK on Docker

I wrote a post a while back about how to get the ElasticSearch + Logstash + Kibana stack set up, and since I have recently been very involved with Docker I thought it would be appropriate to update that post with the new Docker way of doing things.

Update (5/9/15) – I have created a github repo containing configs for running this.  Reader Sergio has also posted a similar solution on github; you can check it out here if you want to try it out.

I found a surprising lack of posts describing how to run the ELK stack with Docker and docker-compose.  This post is much longer and more detailed than usual so feel free to jump around to different sections for details on different components.  I am planning on doing a follow-up to this post with instructions about how to configure Logstash and the logstash-forwarder client, as well as Kibana, to do interesting things with logs stored in ElasticSearch.

There are a lot of other posts about how to get the stack to work but they are either out of date already, since the Docker world changes so fast, or don’t cover specific details of how the different bits work.  The other thing I have observed is that most of the other guides are not done with Docker, which is something that makes life easier.

So the first thing that I’ll cover is how to build the Docker images.  If you are interested I can make this stuff available on the Docker hub as images or public Dockerfiles.  However, for this article and in general I strongly prefer to write my own Dockerfiles, so I will be posting my custom configs and files here rather than pulling other (sometimes official) prebuilt images.

Logstash

The first component we will get set up is the Logstash server.  This setup is also using the log-courier input plugin.  Log-courier is a more customizable and flexible client for forwarding logs to Logstash.  I use both logstash-forwarder and log-courier in this configuration to allow for a more flexible setup.

The following is a Dockerfile that will build a Logstash 1.5.0 image.  One thing to note about this approach is that you can swap out the LOGSTASH_VER and the image will be updated to the correct version automatically and will be ready to be deployed when the image gets rebuilt.

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive
ENV LOGSTASH_VER 1.5.0.rc2
WORKDIR /opt

# Dependencies
RUN apt-get update -qq && \
 apt-get install -y -qq \
 wget \
 python \
 openjdk-7-jre-headless

# Install Logstash
RUN wget --quiet "https://download.elasticsearch.org/logstash/logstash/logstash-$LOGSTASH_VER.tar.gz" -O "/opt/logstash-$LOGSTASH_VER.tar.gz" --no-check-certificate && \
 tar zxf logstash-$LOGSTASH_VER.tar.gz && \
 mv logstash-$LOGSTASH_VER logstash

# Install plugins
RUN /opt/logstash/bin/plugin update logstash-output-zeromq
RUN /opt/logstash/bin/plugin install logstash-input-log-courier

# Config files
ADD server.conf /etc/logstash/server.conf
ADD logstash-forwarder.key /etc/logstash/logstash-forwarder.key
ADD logstash-forwarder.crt /etc/logstash/logstash-forwarder.crt

# lumberjack port
EXPOSE 4545
# log-courier port
EXPOSE 4546

CMD /opt/logstash/bin/logstash -f /etc/logstash/server.conf

This will install Logstash version 1.5.0.rc2 and the logstash-input-log-courier input plugin, add certificates for the forwarding clients and start Logstash with the server configuration.
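If you want to smoke test the image on its own before wiring it up with docker-compose, something like the following should work from the directory containing the Dockerfile, config and certs (the logstash-server tag is just an example name I picked here):

docker build -t logstash-server .
docker run -d --name logstash -p 4545:4545 -p 4546:4546 logstash-server

Logstash will complain in its logs until it can reach an ElasticSearch cluster, but this is enough to confirm the image builds and the process starts.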

In addition to this Dockerfile you will need to generate some certificates for logstash-forwarder clients and the Logstash server itself to use, as well as the server configuration used by the Logstash server.  Below I have a sample but extremely barebones server.conf configuration file.

input {
  lumberjack {
    port => 4545
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
    codec => plain {
      charset => "ISO-8859-1"
    }
  }

  courier {
    port => 4546
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    cluster => "elasticsearch"
  }
}

I will thicken this config up in the next post on how to customize Logstash and do interesting things with Kibana.  For now, we are defining a courier and a lumberjack input, used to ingest logs in to Logstash, as well as one output, which tells Logstash what to do with the logs; in this example it just stuffs them in to ES.

To generate the certificates needed for logstash/logstash-forwarder you can either follow the instructions on the logstash-forwarder github page or you can use the following command to generate the certs and subsequently inject them in to the Docker image.  It should probably go without saying, but make sure the version of openssl used to generate these is an up to date and secure version.

openssl req -x509 -nodes -sha256 -days 1095 -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt

You will need to follow a few prompts to fill out the certificate details, again you can reference the logstash-forwarder github page if you get stuck or are unsure of how to configure the certificate.

After the certs are generated make sure that the names of the output file names match up to the names in the above Dockerfile and that is pretty much it for getting Logstash ready.
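If you want to sanity check the certificate before baking it in to the image, openssl can print its subject and validity period for you:

openssl x509 -in logstash-forwarder.crt -noout -subject -dates

The expiry date should line up with the 1095 days from the command above.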

ElasticSearch

The ElasticSearch image is also pretty straight forward.  This will build the version specified in the ES_PKG_NAME variable, which is 1.4.4 currently, and will configure a few things.

# Pull base image.
FROM dockerfile/java:oracle-java7

# Set install version
ENV ES_PKG_NAME elasticsearch-1.4.4

# Install ElasticSearch
RUN \
 cd / && \
 wget https://download.elasticsearch.org/elasticsearch/elasticsearch/$ES_PKG_NAME.tar.gz && \
 tar xvzf $ES_PKG_NAME.tar.gz && \
 rm -f $ES_PKG_NAME.tar.gz && \
 mv /$ES_PKG_NAME /elasticsearch

# Define mountable directories
VOLUME ["/data"]

# Define working directory
WORKDIR /data

# Custom ES config
ADD elasticsearch.yml /elasticsearch/config/elasticsearch.yml

# Define default command
CMD ["/elasticsearch/bin/elasticsearch"]

# Expose ports
EXPOSE 9200
EXPOSE 9300

The main key to getting ES to work is getting the configuration file set up correctly.  In this example we are mounting local storage (/data) from the host OS in to the container so that if the container dies the indexes and other data aren’t wiped out.  There are also a few other security configuration settings that get set here to lock things down and also to make Kibana 4 happy.

http.cors.allow-origin: "/.*/"
http.cors.enabled: true
cluster.name: elasticsearch
node.name: "logstash.domain.com"
path:
 data: /data/index
 logs: /data/log
 plugins: /data/plugins
 work: /data/work

ES is very straight forward to set up; you just set it up and it runs.
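To test the image on its own you can build and run it, then hit the cluster health endpoint once it has had a few seconds to start (the elasticsearch-server tag and the /data host path are just the examples used in this post):

docker build -t elasticsearch-server .
docker run -d --name elasticsearch -v /data:/data -p 9200:9200 -p 9300:9300 elasticsearch-server
curl "http://localhost:9200/_cluster/health?pretty"

A green or yellow status in the response means ES is up and ready for Logstash and Kibana to talk to.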

Kibana

This will build the newest iteration of Kibana, which is 4.0.0 as of this writing.  If you aren’t living on the bleeding edge and want to know how to get Kibana 3.x.x working let me know and I will post the configuration for it.

FROM ubuntu:14.04

# Dependencies
RUN apt-get update -qq
RUN apt-get install -y -qq nginx-full wget vim

# Kibana
RUN mkdir -p /opt/kibana
RUN wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.0-linux-x64.tar.gz -O /tmp/kibana.tar.gz && \
 tar zxf /tmp/kibana.tar.gz && mv kibana-4.0.0-linux-x64/* /opt/kibana/

# Configs
ADD kibana.yml /opt/kibana/config/kibana.yml

EXPOSE 5601

CMD /opt/kibana/bin/kibana

So the Dockerfile is pretty straight forward but there were a few tidbits to be aware of.  Kibana 4.x.x is significantly different in how it works than 3.x.x so you will need to make a few adjustments if you are familiar with the old version.

You will need to pick and choose the bits out of the following configuration to suit your needs.  For example, you will need to adjust the elasticsearch_url, username, password and will need to decide whether to turn ssl on or off.  There are obviously more options but most of them probably don’t need to be adjusted for now.  Here is what the sample config looks like.

port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://logstash.domain.com:9200"
elasticsearch_preserve_host: true
kibana_index: ".kibana"
default_app_id: "discover"
request_timeout: 300000
shard_timeout: 0
verify_ssl: false

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

That’s pretty much it; most of the difficulty of getting the new version of Kibana working is in the configuration, so if you want to tweak things or if something isn’t working that is the first place to look.
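As with the other images, you can give Kibana a quick standalone test before gluing everything together (again, the kibana-server tag is just an example).  Kibana will log errors until it can reach the elasticsearch_url from kibana.yml, but any HTTP response on port 5601 tells you the container and config are being picked up:

docker build -t kibana-server .
docker run -d --name kibana -p 5601:5601 kibana-server
curl -I http://localhost:5601/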

Docker Compose (glue the pieces together)

This is an integral part of the setup.  This is what controls the different containers and what glues everything together.  Luckily it is easy to get set up and working.  If you aren’t familiar, docker-compose is the recent rebranding of the old “fig” tool, billed as a Docker orchestration tool for running complex Docker applications easily.

The official docs are pretty good and detailed so you can visit their site if you have any questions about how to install or how to get any of the components working here.

The first step is to download and install docker-compose.  Here I am using an Ubuntu system.

sudo pip install -U docker-compose
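You can quickly confirm the install worked before moving on:

docker-compose --version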

There are a few docker-compose command line commands to be familiar with which we’ll get to next, but first I will post the sample docker-compose configuration file to test out your ELK stack.

kibana:
  build: /home/<user>/elk/kibana/4.0.0
  restart: always
  ports:
    - "5601:5601"
  links:
    - "elasticsearch:elasticsearch"

elasticsearch:
  build: /home/<user>/elk/elasticsearch/1.4.4
  restart: always
  ports:
    - "9200:9200"
    - "9300:9300"
  volumes:
    - "/data:/data"

logstash:
  build: /home/<user>/elk/logstash/logstash-1.5.0
  restart: always
  ports:
    - "4545:4545"
    - "4546:4546"
Most of the configuration is straight forward.  Here are the main commands to get everything stitched together and working.

  • docker-compose build (from the directory the docker-compose.yml file is in)
  • docker-compose up (to test the stack)
  • docker-compose kill (bring it down)

After you have ironed out all of the build and run issues and the stack starts with no errors, you can start up the stack in detached mode.

  • docker-compose up -d

Additionally, you can look at the logs if something smells fishy.

  • docker-compose logs
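Once the stack is up in detached mode, a couple of quick checks from the Docker host will confirm everything is wired together (this assumes you kept the port mappings from the compose file above).  Once logs start flowing in you should see logstash-* indices show up:

docker-compose ps
curl "http://localhost:9200/_cat/indices?v"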

Design considerations

One thing that readers might wonder about is the scalability of this setup.  This will scale up very easily but not out.  However, this should be able to handle up to 100k events/second on the Logstash end so there will be other bottlenecks before the components (ES and Logstash) fall down.  I haven’t pushed my own setup this far yet but have been able to get to around 30k/sec before Logstash dies, which I’m still investigating.  Even with that amount of activity and Logstash choking, ES and Kibana don’t get affected.

So if you use this as a guide for a production setup I would recommend that you use a decently sized server, at least 4 CPU, 8 GB memory and adjust the memory and cpu options for the Logstash component if you plan on throwing a lot of logs at it (>30k/s).  I will revisit once I have worked out all the performance issues with some best practices for making Logstash run more smoothly.

I would be interested in knowing how to scale this out if anybody has any recommendations, but this setup should scale up decently at least for most scenarios.  I have not played around with ES sharding across hosts but I imagine that it wouldn’t be super complicated, especially using container volume mounts to store the data and indexes at the host OS level.

Read More

Mount an external EBS volume in AWS

Creating and attaching external volumes is one of those things in administration that is really nice to know how to do, but for me it is also one that doesn’t happen every day, so it is really easy to forget, which makes it a little bit more painful, especially with deadlines and people watching over your shoulder.  Having said that, I think it is worth writing a post about how to do it because it happens just often enough that I have trouble getting everything straight, and I’m sure others run in to this as well, so that’s what I will be writing about today.

There is good documentation for how to do this but there are a lot of separate steps, so consolidating the components might be helpful to readers who stumble across this.  I’m sure there are other ways to accomplish this but I don’t think it is necessary to cover everything here.

Create your “floating volume”

This step is straight forward.  In the AWS EC2 console choose the type of volume this will be (SSD or magnetic), availability zone, and any other options here.

(screenshot: create EBS volume)

After your volume has been created you will want to attach it to an instance.  This part is important because the changes could break your OS volume if you write to your fstab file incorrectly.  In this example I am choosing to attach the EBS volume as /dev/xvdf, but you could name it differently to match your setup.

(screenshot: attach EBS volume)

After the volume has been attached you can check that it has been picked up by the OS by either checking the /dev directory or by running fdisk -l and looking for the size of the disk you just attached.
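For example, something like the following will show the new device and confirm there is nothing on it yet (assuming you attached it as /dev/xvdf as above).  If file -s reports just "data" there is no filesystem on the device yet:

lsblk
sudo file -s /dev/xvdf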

It is worth pointing out that all of the steps in the AWS console can alternatively be done with the aws-cli tool.  It is probably easier but for the sake of time and illustration I am leaving those steps out.  Feel free to reach out if you are interested in the cli tool and I can update this post.

If you run fdisk -l you will notice that the device is empty, so you will need to format the disk.  In this instance I am formatting the disk as ext4.  So use the following command to format it.

sudo mkfs.ext4 /dev/xvdf

After the volume has been formatted you can mount it to your OS.

Attaching the volume

sudo mkdir /data
sudo mount -t ext4 /dev/xvdf /data

If you need to resize the filesystem for whatever reason (for example after growing the EBS volume), you can use the resize2fs command against the device.

sudo resize2fs <device>
sudo resize2fs /dev/xvdf

Here you will create the directory (if it doesn’t already exist) to mount the volume to and then mount it.  At this point it would be fine to be done if you just needed temporary access to the storage on this device.  But if you want your mount to persist and to survive a reboot then you just add an entry to your /etc/fstab file to make sure the /data directory gets the volume mounted to it after a reboot.  Something like the following would work.

/dev/xvdf       /data   ext4    defaults,nobootwait        0       0

The entry is pretty easy to follow but may be confusing for those who are not familiar with how fstab works.  I will break down the various components here.

The first parameter is the location of the volume (/dev/xvdf) and is referred to as the file_system field.

The second parameter specifies where to mount the volume to (/data) and is referred to as the dir field.

The third field is the type.  This is where you specify the file system type or device to be mounted.  If you didn’t format this volume previously it would create problems for the OS when it tried to load in your volume from this file.

The fourth section is the options for the mount.  Here, the defaults,nobootwait section is very important.  If you don’t have the nobootwait option specified here then your OS could potentially hang on boot up if it couldn’t find the specified volume, so this option helps escape it if there are any problems.

The fifth field is to either enable or disable the dump option.  Unless you are familiar with or use the dump command you will almost always set this to 0.

The last section is the pass section.  This simply tells the OS if it should run an fsck or not on this volume.  Here I have it set to 0 so it doesn’t get checked but for OS volumes, this could be important to turn on.
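Before relying on the entry to survive a reboot, it is a good habit to test it: unmount the volume and let mount read it back out of /etc/fstab.  If df shows /dev/xvdf on /data afterwards, the entry is good (the paths and device name here match the example above):

sudo umount /data
sudo mount -a
df -h /data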

Next steps

There are many more things you can do with fstab so if you are interested in other options for how to mount volumes you can look at the fstab documentation for more insights and information.

If you ever wanted to float this volume to another host it would be easy to do, and would not require any new or special formatting since this was already taken care of here.  The steps would look similar to the following.

  • Unmount volume from current OS
  • Remove entry in /etc/fstab for volume mount
  • Detach mount in AWS console from current OS
  • Attach mount to new OS
  • Mount volume manually in new OS to test if it works
  • If the mount works add a new entry in /etc/fstab
  • Done

So that’s pretty much it.  Hopefully this is useful for everybody.

Read More

Kubernetes resize and rolling updates

If you haven’t heard of or used Kubernetes yet, I highly recommend taking a look (see the link below).  I won’t take too much time here today to talk about the Kubernetes project because there is just too much to cover.  Instead I will be writing a series of posts about how to work with Kubernetes and share some tricks and tips that I have discovered in my experiences so far with the tool.  Since the project is still very young and moving incredibly quickly, the best place to get information is either the IRC channel (#google-containers), the mailing list, or their github project.  Please go look at the github project if you are new to Kubernetes, or are interested in learning more about it, especially their docs and examples sections.

As I said, updates and progress have been extremely fast paced, so it isn’t uncommon for things in the Kubernetes project to seem obsolete before they have even been implemented.  For example, the command line tool for interacting with a Kubernetes cluster has already changed faces a few times, which was confusing to me when I first started out.  Kubecfg is on the way out and the project maintainers are working on removing old references to it.  On the flip side, the kubectl command is maturing quite nicely and will be around for awhile, along with the subcommands that I will be describing.

Now that I have all the basic background stuff out of the way, the version of kubectl I am using for this demonstration is v0.9.1.  If you just discovered Kubernetes or have been using kubecfg (as explained above) you will want to make sure to get more familiar with kubectl because it is the preferred tool going forward, at least at this point.

There are a few handy subcommands that come baked in to the kubectl command.  The first is the resize command.  This command allows you to scale the number of running containers being managed by Kubernetes up or down on the fly.  Obviously this can be really powerful!  The syntax is pretty straight forward and below I have an example listed.

kubectl resize --current-replicas=6 --replicas=0 rc test-rc

The --current-replicas argument is optional, the --replicas argument defines the *desired* number of replicas to have running, rc specifies this is a replication controller, and finally, test-rc is the name of the replication controller to scale.  After you scale your replication controller you can check out the status quickly via the following command.

kubectl get pod

Another handy tool to have when working with Kubernetes is the ability to deploy new images as a rolling update.

kubectl rollingupdate test-rc -f test-rc-2.yml --update-period="10s"

The rollingupdate command takes a few arguments.  The first is the name of the current replication controller that you would like to update.  The second is the yml file of the new replication controller to replace it with, and the third, optional argument is --update-period, which allows a user to override the default time that it takes to spin up a new container and spin down an old one.

Below is an example of what your test-rc-2.yml file may look like.

kind: ReplicationController
apiVersion: v1beta1
id: test-rc-2
namespace: default
desiredState:
  replicas: 1
  replicaSelector:
    name: test-rc
    version: v2
  podTemplate:
    labels:
      name: test-rc
      version: v2
    desiredState:
      manifest:
        version: v1beta1
        id: test-rc
        containers:
          - name: test-image
            image: test/test:new-tag
            imagePullPolicy: PullAlways
            ports:
              - name: test-port
                containerPort: 8080

There are a few important things to notice.  The first is that the id must be unique; it can’t be a name that is already in use by another replication controller.  All of the label names should remain the same except for the version.  The version is used to signify that the new replication controller is running a new docker image.  The version number should be unique, which will help keep track of which image version is running.

Another thing to note.  If your original replication controller did not contain a unique key (like version) then you will need to update the original replication controller first, adding a unique key, before attempting to run the rolling update.

If both replication controllers don’t have the same format you will get an error similar to this.

test-rc.yml must specify a matching key with non-equal value in Selector for <selector name>

So that’s pretty much it for now.  I will revisit this post again in the future as new flags and subcommands are added to kubectl for managing and updating replication controllers.  I also plan on writing a few more posts about other aspects and areas of kubectl and running Kubernetes, so please check back soon!

Read More

Setting up a pfSense NAT instance in AWS

One important aspect of cloud deployments that often gets overlooked, especially at start ups, is security.  So I thought I would take some time to go through the process of setting up a NAT instance on AWS with full firewall capabilities.  There are instructions and documentation for this process which are very good but aren’t completely clear, so I will attempt to fill in some of the gaps I ran in to when attempting to set this up myself.

There is one thing to take note of if you have used pfSense before.  This firewall isn’t free.  There is a slight hourly charge that ends up coming out to about $500/yr (roughly $42/month).  If you look at other commercial solutions with similar functionality you are looking at thousands of dollars per month in costs.  Long story short, the cloud image of pfSense has a tiny cost associated with it but is very much worth it.

Just for reference I put together a few comparison prices.

  • Barracuda web app firewall – ($1.04-1.76/hr) (up to ~$1300/month)
  • Vyatta  ($0.30-1.50/hr) (up to ~$1100/month)
  • Sophos UTM ($0.35-$2.80/hr) (up to ~$2000/month)
  • pfSense ($0.07/hr) ($42/month)

As you can see, pfSense is very reasonable compared to some of the other bigger players.  You can build an r3.8xlarge instance and the software price won’t change which doesn’t seem to be the case with others.  One bonus to choosing pfSense is that you automatically qualify for support by agreeing to the ToS when getting the pfSense AMI set up.

Finally, pfSense is rock solid, being built on top of BSD, and is thoroughly tested.  I have been running pfSense on other projects outside of AWS for 5+ years and have never had an issue with it outside of a dead hard drive one time.  Other added benefits of choosing pfSense are that updates are frequent and thoroughly tested, there are tons of add-ons including IPS’s and VPN’s so additional functionality can be built on top, and there is great community support as well.

Getting started

There are a few good resources that I found to be useful when working through this problem, which got me most of the way to a working setup.  They are listed below.

And here is the link to my question about how to do this on serverfault; there is some good detail in the post over there.

Setting up the NAT in pfSense

The first issue that was confusing was the issue of getting the network interfaces set up and configured.  For this setup you will need two interfaces, preferably with static IP addresses.  You will also need to make sure that you disable source/destination checks for the interface that will be acting as the LAN interface that the nat goes through.  Disabling source and destination checks is pretty straightforward and is detailed in pretty much all of the guides.
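Disabling the check can be done from the EC2 console, or with the aws-cli tool if you prefer the command line (the instance id below is just a placeholder for your pfSense instance):

aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --no-source-dest-check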

You should note that there will be tabs for firewalling for LAN as well as WAN; if you can keep these two straight it should be much easier to troubleshoot and configure your pfSense machine.  Out of the box, the firewall on pfSense will not be configured to allow your LAN interface to do any sort of NATing, so you will need to manually create rules to get started.  If you check the WAN firewall tab you should notice some access rules but the LAN tab should be empty.  Most of the work we will be doing will be on the LAN firewall.

The first rule to set up to make things easier to troubleshoot is a ping rule.  There is a WAN rule for ping but not for LAN.  You can essentially copy the WAN rule into a new one and modify it to look similar to the following.

(screenshot: LAN firewall rule)

This rule will work as the template for the other rules that need to be put in to place.  The other rules will be for outbound web access.  Just copy this rule in to a new rule, change the protocol to TCP, and make one rule that allows port 80 and another that allows 443.  The result should look similar to what I have listed below.

(screenshot: firewall rules for the NAT)

Just a quick note.  If at any point you are having trouble seeing traffic or are getting stuck in your troubleshooting, an excellent way to figure out what is going on is the logging that is provided by pfSense.  You can access all of the various logs to see what is happening by selecting Status -> System Logs and then highlighting the firewall tab.

Modifying your outbound nat

Here is what your outbound NAT rule should look like.

(screenshot: outbound NAT rule)

Notice the “Networks_to_NAT” value in the source section.  This is a pfSense alias that can be used as a sort of variable to help ease management.  You can either use this alias or specify the local subnet you want to use here.  To check the values in your alias you can go to Firewall -> Aliases.

Conclusion

This setup will provide you with a nice easy way to manage your network in AWS.  The guides for setting up a NAT are nice and are a good first step, but with a firewall in place you can do so many other things, especially auditing, that just aren’t available or viable with a straight AWS NAT instance, or that are way out of your price range with some of the other commercial solutions available.

pfSense also provides the capability to add more advanced tools like IDS/IPS, VPN and high availability if you choose so there is nice room for expansion.  Even if you don’t take advantage of all of the additional components of pfSense you will still have a rock solid firewall and nat instance that is suitable for production workloads at a fraction of the cost of other commercial solutions.

Read More

Add environment variable file to fig

If you haven’t heard of fig and are using Docker, you need to check it out.  Basically Fig is a tool that allows users to quickly create development environments using Docker.  Fig alleviates the complexity and tediousness of having to manually bring containers up and down, stitch them together and basically orchestrate a Docker environment.  On top of this, Fig offers some other cool functionality, for example, the ability to scale up applications.  I am excited to see what happens with the project because it was recently merged in to the Docker project.  My guess is that there will be many new features and additions to either Fig or Docker itself as Fig gets rolled in to the Docker core project.  For now, you can check out Fig here if you haven’t heard of it and are interested in learning more.

One issue that I have run in to is that there is currently not a great way to handle a large number of environment variables in fig.  In Docker there is an option that allows a user to pass in an environment variable file with the --env-file <filename> flag.  To do the same with Fig in its current form, you are forced to list out each individual environment variable in your configuration, which can quickly become tedious and confusing.

There is a PR out for adding in the ability to pass an environment variable file in to fig via the env-file option in a fig.yml file.  This approach to me is much easier than adding each environment variable separately to the configuration with the environment option, as well as having to update the fig.yml configuration if any of the values ever change.  I know that functionality like this will get merged eventually, but until then I have been using the PR as a workaround to solve this issue.  I think that this is also a good opportunity to show people how to get a project working manually with custom changes.  Luckily the fix isn’t difficult.

This post will assume that you have git, python and pip installed.  If you don’t have these tools installed go ahead and get that done.  The first step is to clone the fig project on github on to your local machine, see above for the link to the PR.

git clone git@github.com:docker/fig.git

Jump in to the fig project you just cloned and edit the service.py file.  This is the file that handles the processing of environment variables.  There are a few sections of the file that need to be updated.  Check the PR to be sure, but at the time of writing, the following changes should be made.

Line 55

- supported_options = DOCKER_CONFIG_KEYS + ['build', 'expose']
+ supported_options = DOCKER_CONFIG_KEYS + ['build', 'expose', 'env-file']

Line 318

+    def _get_environment(self):
+        env = {}
+
+        if 'env-file' in self.options:
+            env = env_vars_from_file(self.options['env-file'])
+
+        if 'environment' in self.options:
+            if isinstance(self.options['environment'], list):
+                env.update(dict(split_env(e) for e in self.options['environment']))
+            else:
+                env.update(self.options['environment'])
+
+        return dict(resolve_env(k, v) for k, v in env.iteritems())
+

Line 352

-        if 'environment' in container_options:
-            if isinstance(container_options['environment'], list):
-                container_options['environment'] = dict(split_env(e) for e in container_options['environment'])
-            container_options['environment'] = dict(resolve_env(k, v) for k, v in container_options['environment'].iteritems())
+        container_options['environment'] = self._get_environment()

Line 518

+
+
+def env_vars_from_file(filename):
+    """
+    Read in a line delimited file of environment variables.
+    """
+    env = {}
+    for line in open(filename, 'r'):
+        line = line.strip()
+        if line and not line.startswith('#') and '=' in line:
+            k, v = line.split('=', 1)
+            env[k] = v
+    return env

That should be it.  Now you should be able to install fig with the new changes.  Make sure you are in the root fig directory that contains the setup.py file.

sudo python setup.py develop
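A quick check that the patched copy is the one on your path (it should print the fig version without any errors):

fig --version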

Now you should be able to edit your fig.yml file to reflect the changes that have been added to fig via env-file.  Here is what a sample configuration might look like.

testcontainer:
  image: username/testcontainer
  ports:
    - "8080:8080"
  links:
    - "mongodb:mongodb"
  env-file: "/home/username/test_vars"

Notice that nothing else changed.  But instead of having to list out environment variables one at a time we can simply read in a file.  I have found this to be very useful for my workflow, and I hope others can either adapt this or use this as well.  I have a feeling this will get merged in to fig at some point, but for now this workaround works.
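The env file itself is just one KEY=value pair per line, with lines starting with # skipped by the env_vars_from_file helper above.  Something like this (the variable names and values here are made up for illustration):

# /home/username/test_vars
MONGO_HOST=mongodb
MONGO_PORT=27017
APP_ENV=development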

Read More