Mount a volume using Ignition and Terraform

Sometimes when provisioning a server you may want to configure and provision storage as part of the bootstrapping and booting process.  For example, the other day I ran into an issue where I needed to define a disk, partition it, mount it to a specified location, and then create a few directories in it.  It turned out to be surprisingly difficult to provision this storage, and I learned quite a few things along the way that I thought were worth sharing.

I’d just like to mention that Ignition works like magic.  If you aren’t familiar with it, Ignition is basically a tool to help provision and configure servers, very similar to cloud-config, except that by default Ignition only runs once, on first boot.  The magic of Ignition is that it injects itself into the initramfs before the OS ever boots and manipulates the system from there.  Ignition configs can also be read in from a remote URL, so they can easily be provisioned in bare metal infrastructures.  There were several pieces to this puzzle.

The first was getting down all of the various Ignition configuration components in Terraform.  Nothing was particularly complicated; there was just a lot of trial and error to get everything working.  Terraform has some really nice documentation for working with Ignition configurations, so I’d recommend starting there and just playing around to figure out some of the various bits and pieces of configuration that Ignition can do.  There is some documentation on Ignition troubleshooting as well, which I found helpful when things weren’t working correctly.

Below, each portion of the Ignition configuration gets declared inside of an “ignition_config” block.  The Ignition configuration then points to each individual component that we want Ignition to configure, e.g. systemd, filesystems, directories, etc.

data "ignition_config" "staging_rancher_host_stateful" {
  systemd = [
     "${data.ignition_systemd_unit.mount_data.id}",
  ]

  filesystems = [
    "${data.ignition_filesystem.data_fs.id}",
  ]

  directories = [
    "${data.ignition_directory.data_dir.id}",
  ]

  disks = [
    "${data.ignition_disk.data_disk.id}",
  ]
}

This part of the setup is pretty straightforward.  Create a data block with the needed Ignition configuration to mount the disk to the correct location, format the device if it hasn’t already been formatted, create the desired directory, and then create the Systemd unit to configure the mount point for the OS.  Here’s what each of the data blocks might look like.

data "ignition_filesystem" "data_fs" {
   name = "data"

  mount {
    device = "/dev/xvdb1"
    format = "ext4"
  }
}

data "ignition_directory" "data_dir" {
  filesystem = "data"
  path = "/data"
  uid = 500
  gid = 500
}

data "ignition_disk" "data_disk" {
  device = "/dev/xvdb"

  partition {
    number = 1
    start = 0
    size = 0
  }
}

Next, create the Systemd unit.

data "ignition_systemd_unit" "mount_data" {
  content = "${file("./data.mount")}"
  name = "data.mount"
}

Another challenge was getting the Systemd unit to mount the disk correctly.  I don’t work with Systemd frequently, so I initially had some trouble figuring this part out.  Basically, Systemd expects the unit file name to EXACTLY match what’s declared inside the “Where” clause of the unit definition.

For example, the following configuration needs to be named data.mount because its “Where” clause points at /data.

[Unit]
Description=Mount /data
Before=local-fs.target

[Mount]
What=/dev/xvdb1
Where=/data
Type=ext4

[Install]
WantedBy=local-fs.target
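
As an aside, if you are ever unsure what unit name systemd expects for a given mount path, the systemd-escape utility can generate it for you.  This is just a quick sketch using the /data mount from above (plus a deeper hypothetical path to show the escaping):

$ systemd-escape -p --suffix=mount /data
data.mount
$ systemd-escape -p --suffix=mount /var/lib/data
var-lib-data.mount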

After all the kinks have been worked out of the Systemd unit(s) and the Terraform Ignition configuration above, you should be able to deploy this and have Ignition provision disks for you automatically when the OS comes up.  This can be extended as much as needed for getting initial disks set up correctly and is a huge step toward automating your infrastructure in a nice, repeatable way.

There is currently an open issue with Ignition where it breaks when attempting to re-provision a previously configured disk on a new machine.  Basically, the Ignition process chokes because it sees that the device has already been partitioned and formatted and can’t do it again.  I ran into this scenario while trying to create a floating persistent data EBS volume that gets attached to servers in an autoscaling group, where I wanted the volume to be able to move around freely if a server gets killed off.

Bash tricks

Update 2/18/18 – add some handy alt shortcuts

Bash is great.  As I have discovered over the years, Bash contains many different layers, like a good movie or a fine wine.  It is fun to explore and expose these different layers and find uses for them.  As my experience level has increased, I have (slowly) uncovered a number of features of Bash that make life easier, and I have worked to incorporate them into my own workflows and style.

The great thing about fine arts, Bash included, is that there are so many nuances and for Bash, a huge number of features and uses, which makes the learning process that much more fun.

It does take a lot of time and practice to get used to the syntax and to become effective with these shortcuts.  I use this page as a reference whenever I think of something that sounds useful and could save time in a script or a command.  At first it may take more time to look up how to use these shortcuts, but eventually, with practice and drilling, they will become second nature and real time savers.

Shell shortcuts

Navigating the Bash shell is easy to do, but it takes time to learn how to do well.  Below are a number of shortcuts that make the navigation process much more efficient.  I use nearly all of them daily (except Ctrl + t and Ctrl + xx, which I only recently discovered).  In a similar vein, I wrote a separate post long ago about setting up CLI shortcuts on iTerm that can further augment the capabilities of the CLI.

This is a nice reference with more examples and features.

  • Ctrl + a => Return to the start of the command you’re typing
  • Ctrl + e => Go to the end of the command you’re typing
  • Ctrl + u => Cut everything before the cursor to a special clipboard
  • Ctrl + k => Cut everything after the cursor to a special clipboard
  • Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
  • Ctrl + t => Swap the two characters before the cursor (you can actually use this to drag a character from the left to the right, try it!)
  • Ctrl + w => Delete the word / argument left of the cursor
  • Ctrl + l => Clear the screen
  • Ctrl + _ => Undo previous key press
  • Ctrl + xx => Toggle between current position and the start of the line

There are some nice Alt key shortcuts in Linux as well.  You can map the Alt key in OSX pretty easily to unlock these shortcuts.

  • Alt + l => Lowercase the word that the cursor is under (if the cursor is in the middle of a word it will lowercase the rest of that word)
  • Alt + u => Uppercase the word that the cursor is under (likewise, from the cursor to the end of the word)
  • Alt + t => Swap the word or argument under the cursor with the previous one
  • Alt + . => Paste the last word of the previous command
  • Alt + b => Move backward one word
  • Alt + f => Move forward one word
  • Alt + r => Undo any changes that have been done to the current command

Argument tricks

Argument tricks build on the navigation capabilities that Bash shortcuts provide and can speed up your effectiveness in the terminal even further.  Below is a list of special history expansions that can be inserted into any command and expand to pieces of previous commands.

Repeating

  • !! => Repeat the previous (full) command
  • !foo => Repeat the most recent command that starts with ‘foo’ (e.g. !ls)
  • !^ => Repeat the first argument of the previous command
  • !$ => Repeat the last argument of the previous command
  • !* => Repeat all arguments of last command
  • !:<number> => Repeat a specifically positioned argument
  • !:1-2 => Repeat a range of arguments
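
Here is a quick hypothetical session showing a few of these expansions in action; the comments show what Bash expands each line to:

$ mkdir -p /tmp/staging
$ touch file1 file2
$ cp !* /tmp/staging     # expands to: cp file1 file2 /tmp/staging
$ echo !:1-2             # expands to: echo file1 file2
$ !mkdir                 # expands to: mkdir -p /tmp/staging
$ ls !$                  # expands to: ls /tmp/staging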

Printing

  • !$:p => Print out the word that !$ would substitute
  • !*:p => Print out what !* would substitute (all arguments of the previous command)
  • !foo:p => Print out the command that !foo would run
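
The :p modifier is an easy way to double check an expansion before running it.  The printed command also gets pushed onto your history, so a follow-up !! will actually run it:

$ rm -rf /tmp/staging
$ !rm:p            # prints (but does not run): rm -rf /tmp/staging
$ !!               # actually runs the printed command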

Special parameters

When writing scripts, there are a number of special parameters you can reference in the shell.  These can be convenient for doing lots of different things in scripts.  Part of the fun of writing scripts and automating things is discovering creative ways to fit the various pieces of the puzzle together in elegant ways.  The “special” parameters listed below can be seen as pieces of the puzzle, and can be very powerful building blocks in your scripts.

Here is a full reference from the Bash documentation

  • $* => Expand positional parameters. When quoted, expands to a single word with all parameters joined by the first character of IFS – think spaces
  • $@ => Expand positional parameters. When quoted, each parameter expands to a separate word, enclosed by “” – think arrays
  • $# => Expand to the number of positional parameters passed to the script or function
  • $? => Expand to the exit status of the previous command
  • $$ => Expand to the pid of the shell
  • $! => Expand to the pid of the most recent background command
  • $0 => Expand to the name of the shell or script
  • $_ => Expand to the last argument of the previous command
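
To see these in action, here is a small, contrived script (call it params.sh) that prints out what several of the special parameters expand to; run it as ./params.sh foo bar:

#!/usr/bin/env bash

echo "script name:     $0"    # the name of the script
echo "number of args:  $#"    # the number of parameters passed in
echo "all args:        $@"    # each parameter as a separate word

sleep 2 &                     # kick off a background command
echo "background pid:  $!"    # the pid of that background command
echo "shell pid:       $$"    # the pid of this shell/script

false
echo "exit status:     $?"    # exit status of the previous command (false = 1)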

Conclusion

There are so many crevices and cracks of Bash to explore; I keep finding new and interesting things about Bash that lead down new paths and help my skills grow.  I hope some of these tricks give you ideas that can help improve your own Bash style and workflows in the future.

Templated Nginx configuration with Bash and Docker

Shoutout to @shakefu for his Nginx and Bash wizardry in figuring a lot of this stuff out.  I’d like to take credit for this, but he’s the one who got a lot of it working originally.

Sometimes it can be useful to template Nginx configuration files with environment variables to fine tune and adjust control over various aspects of Nginx.  A recent example of this idea was a project where I set up an Nginx proxy with a very bare bones configuration.  As part of the project, I wanted a quick and easy way to update some of the major Nginx settings, like the port it listens on for traffic, the server name, upstream servers, etc.

It turns out that there is a quick and dirty way to template basic Nginx configurations using Bash, which ended up being really useful, so I thought I would share it.  There are a few caveats to this method, but it is definitely worth the effort if you have a simple setup or a setup that requires periodic changes.  I stuck the configuration into a Dockerfile so that it can easily be updated and ported around; by using the nginx:alpine image as the base image, the total size, all said and done, is around 16MB.  If you’re not interested in the Docker bits, feel free to skip them.

The first part of using this method is to create a simple configuration file that will be used to substitute in some environment variables.  Here is a simple template that is useful for changing a few Nginx settings.  I called it nginx.tmpl, which will be important for how the template gets rendered later.

events {}

http {
  error_log stderr;
  access_log /dev/stdout;

  upstream upstream_servers {
    server ${UPSTREAM};
  }

  server {
    listen ${LISTEN_PORT};
    server_name ${SERVER_NAME};
    resolver ${RESOLVER};
    set ${ESC}upstream ${UPSTREAM};

    # Allow injecting extra configuration into the server block
    ${SERVER_EXTRA_CONF}

    location / {
      proxy_pass ${ESC}upstream;
    }
  }
}

The configuration is mostly straightforward.  We are basically just taking this configuration file and inserting a few templated variables, denoted by the ${VARIABLE} syntax, which are just environment variables that get substituted into the configuration when it gets bootstrapped.  There are a few “tricks” that you may need to use if your configuration starts to get more complicated.  The first is the use of the ${ESC} variable.  Nginx uses ‘$’ for its own variables, which is also what the template uses.  The extra ${ESC} basically just gives us a way to escape that $ so that we can use Nginx variables as well as templated variables.
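
To make the escaping concrete, here is roughly what the set line from the template looks like before and after rendering, assuming the default values of ESC='$' and UPSTREAM=icanhazip.com:80 from the Dockerfile below:

# template:  set ${ESC}upstream ${UPSTREAM};
# rendered:  set $upstream icanhazip.com:80;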

The other interesting thing that we discovered (props to shakefu for this magic) was that you can basically jam arbitrary server block level configuration into an environment variable.  We do this with ${SERVER_EXTRA_CONF} in the above configuration, and I will show an example of how to use that environment variable later.

Next, I created a simple Dockerfile that provides some default values for the various templated variables.  The Dockerfile also copies the templated configuration into the image, and does some Bash magic to render the template.

FROM nginx:alpine

ENV LISTEN_PORT=8080 \
  SERVER_NAME=_ \
  RESOLVER=8.8.8.8 \
  UPSTREAM=icanhazip.com:80 \
  UPSTREAM_PROTO=http \
  ESC='$'

COPY nginx.tmpl /etc/nginx/nginx.tmpl

CMD /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

There are some things to note.  First, not all of the variables in the template need to be declared in the Dockerfile, which means that if a variable isn’t set it will simply be blank in the rendered template and won’t do anything.  Some variables do need defaults, though, so if you ever run across that scenario you can just add them to the Dockerfile and rebuild.

The other interesting thing is how the template gets rendered.  There is a small tool called envsubst (part of the GNU gettext package) that substitutes the values of environment variables into files.  In the Dockerfile, this tool gets executed as part of the default command, taking the template as the input and creating the final configuration.

/bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf

Nginx gets started in a slightly silly way so that daemon mode can be disabled (we want Nginx running in the foreground), and if that fails, the rendered template gets printed out to help look for errors in the rendered configuration.

&& nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

To quickly test the configuration, you can create a simple docker-compose.yml file with a few of the desired environment variables, like I have below.

version: '3'
services:
  nginx_proxy:
    build:
      context: .
      dockerfile: Dockerfile
    # Only test the configuration
    #command: /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && cat /etc/nginx/nginx.conf"
    volumes:
      - "./nginx.tmpl:/etc/nginx/nginx.tmpl"
    ports:
      - 80:80
    environment:
      - SERVER_NAME=_
      - LISTEN_PORT=80
      - UPSTREAM=test1.com
      - UPSTREAM_PROTO=https
      # Override the resolver
      - RESOLVER=4.2.2.2
      # The following would add an escape if it isn't in the Dockerfile
      # - ESC=$$

Then you can bring up the Nginx server.

docker-compose up

The configuration doesn’t get rendered until the container runs, so to test the configuration only, you can swap in a command in the docker-compose file (like the commented-out one above) that renders the configuration and then prints it out, to make sure it looks right.
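
If you happen to have envsubst installed locally (it ships with GNU gettext), you can also render the template entirely outside of Docker as a quick sanity check.  This is just a sketch, assuming the nginx.tmpl file from above is in the current directory:

LISTEN_PORT=80 SERVER_NAME=_ RESOLVER=8.8.8.8 \
  UPSTREAM=test1.com ESC='$' SERVER_EXTRA_CONF= \
  envsubst < nginx.tmpl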

If you are interested in adding additional configuration, you can use ${SERVER_EXTRA_CONF} as alluded to above.  An example of extra configuration that can be assigned to the environment variable is shown below: an arbitrary snippet that allows connections to do long polling to Nginx, which basically means that Nginx will try to hold connections open longer for existing clients.

error_page 420 = @longpoll;
if ($arg_wait = "true") { return 420; }
}
location @longpoll {
# Proxy requests to upstream
proxy_pass $upstream;
# Allow long lived connections
proxy_buffering off;
proxy_read_timeout 900s;
keepalive_timeout 160s;
keepalive_requests 100000;

The above snippet would be a perfectly valid environment variable as far as the container is concerned; it will just look a little bit weird to the eye.

[Image: nginx proxy environment variables]

That’s all I’ve got for now.  This minimal templated Nginx configuration is handy for testing out simple web servers, especially proxies, and is also nice to port around using Docker.

Fix the JenkinsAPI No valid crumb error

If you are working with the Python-based JenkinsAPI library, you might run into the “No valid crumb was included in the request” error.  The error below will probably look familiar if you’ve run into this issue.

Traceback (most recent call last):
 File "myscript.py", line 47, in <module>
 deploy()
 File "myscript.py", line 24, in deploy
 jenkins.build_job('test')
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/jenkins.py", line 165, in build_job
 self[jobname].invoke(build_params=params or {})
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/job.py", line 209, in invoke
 allow_redirects=False
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/utils/requester.py", line 143, in post_and_confirm_status
 response.text.encode('UTF-8')
jenkinsapi.custom_exceptions.JenkinsAPIException: Operation failed. url=https://jenkins.example.com/job/test/build, data={'json': '{"parameter": [], "statusCode": "303", "redirectTo": "."}'}, headers={'Content-Type': 'application/x-www-form-urlencoded'}, status=403, text=b'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>\n<title>Error 403 No valid crumb was included in the request</title>\n</head>\n<body><h2>HTTP ERROR 403</h2>\n<p>Problem accessing /job/test/build. Reason:\n<pre> No valid crumb was included in the request</pre></p><hr><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.4.z-SNAPSHOT</a><hr/>\n\n</body>\n</html>\n'

It is good practice to enable additional security in Jenkins by turning on the “Prevent Cross Site Request Forgery exploits” option in the security settings, so if you see this error it is actually a good sign.  The screenshot below shows this security feature in Jenkins.

[Image: enabling CSRF protection in the Jenkins security settings]

The Fix

This error threw me off at first, but it didn’t take long to find a quick fix.  There is a CrumbRequester class (in jenkinsapi.utils.crumb_requester) that you can use to create the crumbed auth token.  You can use the following example as a guideline in your own code.

from jenkinsapi.jenkins import Jenkins
from jenkinsapi.utils.crumb_requester import CrumbRequester

JENKINS_USER = 'user'
JENKINS_PASS = 'pass'
JENKINS_URL = 'https://jenkins.example.com'

# We need to create a crumb for the request first
crumb = CrumbRequester(username=JENKINS_USER, password=JENKINS_PASS, baseurl=JENKINS_URL)

# Now use the crumb to authenticate against Jenkins
jenkins = Jenkins(JENKINS_URL, username=JENKINS_USER, password=JENKINS_PASS, requester=crumb)

...

The code looks very similar to creating a normal Jenkins authentication object, the only difference being that we create and then pass in a crumb for the request, rather than just a username/password combination.  Once the crumbed authentication object has been created, you can continue writing your Python code as you would normally.  If you’re interested in learning more about crumbs and CSRF you can find more here, or just Google CSRF for more info.

This issue was slightly confusing/annoying, but I’d rather deal with an extra few lines of code and know that my Jenkins server is secure.

Remote Jenkins builds using Github auth

Having the ability to call Jenkins jobs remotely is pretty slick and adds some extra flexibility, allowing for some interesting applications.  For example, you could use remote builds to call a script from a chat app or from some other web application.  I have chosen to write a quick Bash script as a proof of concept, but this could easily be extended or written in a different language via one of the Jenkins language-specific libraries.

The instructions for the method I am using assume a Jenkins freestyle job, as I haven’t experimented much with pipelines for remote builds yet.

The first step is to enable remote builds for the Jenkins job that will be triggered.  There is an option in the job configuration called “Trigger builds remotely” which allows the job to be called from a script.

[Image: the “Trigger builds remotely” option in the Jenkins job configuration]

The authentication token can be any arbitrary string you choose.  Also note the URL shown with that option; you will need it later as part of the script that calls this job.

With the authentication token configured and the Jenkins URL recorded, you can begin writing the script.  The first step is to populate some variables for kicking off the job.  Below is an example of how you might do this.

jenkins_url="https://jenkins.example.com"
job_name="my-jenkins-job"
job_token="xxxxx"
auth="username:token"
my_repo="some_git_repo"
git_tag="abcd123"

Be sure to fill in these variables with the corresponding values: job_token should match the string you entered above in the Jenkins job, and auth should be your GitHub username/token combination.  If you are not familiar with GitHub tokens you can find more information about setting them up here.

As part of the script, you will want to create a Jenkins “crumb” using your GitHub credentials; the crumb is used to protect against cross-site request forgery (CSRF) attacks.  Here’s what the creation of the crumb looks like (borrowed from this Stackoverflow post).

crumb=$(curl -s --user "$auth" "$jenkins_url/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
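
If the request succeeds, the crumb variable will contain a header-style string that can be passed to curl with -H.  Echoing it is an easy sanity check; the value below is hypothetical, and older Jenkins versions use .crumb instead of Jenkins-Crumb as the field name:

$ echo "$crumb"
Jenkins-Crumb:0123456789abcdef0123456789abcdef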

Once you have your variables configured and your crumb all set up, you can test out the Jenkins job.

curl -X POST -H "$crumb" "$jenkins_url/job/$job_name/build?token=$job_token" \
  --user "$auth" \
  --data-urlencode json='
  {"parameter":
    [
      {"name":"parameter1", "value":"test1"},
      {"name":"parameter2", "value":"test2"},
      {"name":"git_repo", "value":"'$my_repo'"},
      {"name":"git_tag", "value":"'$git_tag'"}
    ]
  }'

In the example job above, I am using several Jenkins parameters as part of the build; the JSON name values correspond to the job’s parameters.  Notice that a few of the values are shell variables: the surrounding single-quoted string is closed before each variable and reopened after it, so that the shell expands the variable inside the JSON.  The syntax looks slightly odd but allows for some additional flexibility in the job configuration and also allows the script to be called with dynamic values.

If you call this script now, it should kick off a Jenkins job for you with all of the values you have provided.
