Bash tricks

Bash is great.  As I have discovered over the years, Bash contains many different layers, like a good movie or a fine wine, and it is fun to explore and expose those layers and find uses for them.  As my experience has grown, I have (slowly) uncovered a number of Bash features that make life easier, and I have worked to incorporate them into my own workflows and style.

The great thing about fine arts, Bash included, is that there are so many nuances to discover.  Bash in particular has a huge number of features and uses, which makes the learning process that much more fun.

It does take a lot of time and practice to get used to the syntax and to become effective with these shortcuts.  I use this page as a reference whenever I think of something that sounds useful and could save time in a script or a command.  At first it may take more time to look up how to use these shortcuts, but with practice and drilling they will become second nature and real time savers.

Shell shortcuts

Navigating the Bash shell is easy to do, but it takes time to learn how to do well.  Below are a number of shortcuts that make the navigation process much more efficient.  I use nearly all of them daily (except Ctrl + t and Ctrl + xx, which I only recently discovered), and there is a quick way to inspect these bindings, shown after the list.  In a similar vein, I wrote a separate post long ago about setting up CLI shortcuts on iTerm that can further augment the capabilities of the CLI.

This is a nice reference with more examples and features.

  • Ctrl + a => Return to the start of the command you’re typing
  • Ctrl + e => Go to the end of the command you’re typing
  • Ctrl + u => Cut everything before the cursor to a special clipboard
  • Ctrl + k => Cut everything after the cursor to a special clipboard
  • Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
  • Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
  • Ctrl + w => Cut the word before the cursor (it goes to the same special clipboard as Ctrl + u and Ctrl + k)
  • Ctrl + l => Clear the screen
  • Ctrl + _ => Undo the last edit (can be pressed repeatedly)
  • Ctrl + xx => Toggle between current position and the start of the line
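
If you ever forget what is bound where, Readline itself can report its bindings.  Here is a quick way to double check a few of the shortcuts above (the function names are Readline's own, and the grep pattern is just an example):

bind -P | grep -E 'transpose-chars|unix-line-discard|kill-line|yank|unix-word-rubout'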

Argument tricks

Argument tricks build on the navigation shortcuts above and can further speed up your effectiveness in the terminal.  Below is a list of history expansions that can be used with any command and expand to pieces of previous commands (a short demonstration follows each list).

Repeating

  • !! => Repeat the previous (full) command
  • !foo => Repeat the most recent command that starts with ‘foo‘ (e.g. !ls)
  • !^ => Repeat the first argument of the previous command
  • !$ => Repeat the last argument of the previous command
  • !* => Repeat all arguments of the last command
  • !:<number> => Repeat a specifically positioned argument
  • !:1-2 => Repeat a range of arguments
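
Here is a quick demonstration of a few of the repeating expansions.  The file and directory names are just examples:

mkdir -p /tmp/example-dir
cd !$            # expands to: cd /tmp/example-dir
touch a.txt b.txt c.txt
ls !*            # expands to: ls a.txt b.txt c.txt
echo one two three
echo !:2         # expands to: echo two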

Printing

  • !$:p => Print out the word that !$ would substitute
  • !*:p => Print out the words that !* would substitute (everything from the previous command except the command itself)
  • !foo:p => Print out the command that !foo would run
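
The :p modifier is handy for checking an expansion before committing to it, and the printed command is also added to your history, so you can run it with !! if it looks right:

ls /var/log
!$:p             # prints: /var/log  (nothing is executed)
!ls:p            # prints: ls /var/log  (again, not executed)
!!               # actually runs ls /var/log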

Special parameters

When writing scripts, there are a number of special parameters the shell makes available to you.  These can be convenient for doing lots of different things in scripts.  Part of the fun of writing scripts and automating things is discovering creative ways to fit the various pieces of the puzzle together in elegant ways.  The “special” parameters listed below are some of those pieces, and can be very powerful building blocks in your scripts.

Here is a full reference from the Bash documentation

  • $* => Expand the positional parameters.  When quoted, expands to a single word with the parameters joined by the first character of IFS – think strings
  • $@ => Expand the positional parameters.  When quoted, each parameter expands to a separate word – think arrays
  • $# => Expand to the number of positional parameters
  • $? => Expand to the exit status of the previous command
  • $$ => Expand to the pid of the shell
  • $! => Expand to the pid of the most recent background command
  • $0 => Expand to the name of the shell or script
  • $_ => Expand to the last argument of the previous command
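
A tiny script makes these easier to see in action.  Save the following as anything you like (params.sh here is just a placeholder name) and call it with a few arguments:

#!/usr/bin/env bash
# Usage: ./params.sh one two three

echo "script name (\$0): $0"
echo "number of parameters (\$#): $#"
echo "all parameters as one word (\$*): $*"
for arg in "$@"; do
  echo "separate word from (\$@): $arg"
done

ls /does-not-exist 2>/dev/null
echo "exit status of the previous command (\$?): $?"

echo "pid of this shell (\$\$): $$"
sleep 1 &
echo "pid of the background sleep (\$!): $!"
wait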

Conclusion

There are so many crevices and cracks of Bash to explore.  I keep finding new and interesting things about Bash that lead down new paths and help my skills grow.  I hope some of these tricks give you ideas that can improve your own Bash style and workflows in the future.

Remote Jenkins builds using Github auth

Having the ability to call Jenkins jobs remotely is pretty slick and adds some extra flexibility and allows for some interesting applications.  For example, you could use remote builds to call a script from a chat app or from some other web application.  I have chosen to write a quick bash script as a proof of concept, but this could easily be extended or written in a different language via one of its language specific libraries.

The instructions for the method I am using assume a Jenkins freestyle job, as I haven’t experimented much with pipelines for remote builds yet.

The first step is to enable remote builds for the Jenkins job that will be triggered.  There is an option in the job for “Trigger builds remotely” which allows the job to be called from a script.

[Screenshot: the “Trigger builds remotely” option in the job configuration]

The authentication token can be any arbitrary string you choose.  Also note the URL shown below the option; you will need it later as part of the script that calls this job.

With the authentication token configured and the Jenkins URL recorded, you can begin writing the script.  The first step is to populate some variables for kicking off the job.  Below is an example of how you might do this.

jenkins_url="https://jenkins.example.com"  # base URL of the Jenkins server
job_name="my-jenkins-job"                  # name of the job to trigger
job_token="xxxxx"                          # the "Trigger builds remotely" token from the job
auth="username:token"                      # GitHub username:token used to authenticate
my_repo="some_git_repo"                    # example job parameter
git_tag="abcd123"                          # example job parameter

Be sure to fill in these variables with the values that correspond to your environment.  job_token should match the string you entered above in the Jenkins job, and auth should be your GitHub username/token combination.  If you are not familiar with GitHub tokens you can find more information about setting them up here.

As part of the script, you will want to create a Jenkins “crumb” using your GitHub credentials, which Jenkins uses to prevent cross-site request forgery (CSRF) attacks.  Here’s what the creation of the crumb looks like (borrowed from this Stackoverflow post).

crumb=$(curl -s --user "$auth" "$jenkins_url"'/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Once you have your variables configured and your crumb all set up, you can test out the Jenkins job.

curl -X POST -H "$crumb" "$jenkins_url/job/$job_name/build?token=$job_token" \
  --user "$auth" \
  --data-urlencode json='
  {"parameter":
    [
      {"name":"parameter1", "value":"test1"},
      {"name":"parameter2", "value":"test2"},
      {"name":"git_repo", "value":"'"$my_repo"'"},
      {"name":"git_tag", "value":"'"$git_tag"'"}
    ]
  }'

In the example job above, I am using several Jenkins parameters as part of the build.  The json name values correspond to the parameters defined in the job.  Notice that I am using shell variables for a few of the values; make sure the surrounding single quotes are closed and reopened around each variable, as above, so the json stays correctly escaped.  The syntax is slightly awkward, but it allows for additional flexibility in the job configuration and means the script can be called with dynamic values.

If you call this script now, it should kick off a Jenkins job for you with all of the values you have provided.
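
As a quick sanity check, you can also hit the job’s API right after triggering it to see whether a build actually started.  This is just one possible check (it assumes jq is installed and reuses the variables from above):

# Give Jenkins a moment to register the build, then inspect the last build
sleep 5
curl -s --user "$auth" "$jenkins_url/job/$job_name/lastBuild/api/json" | jq -r '.result'
# Prints "null" while the build is still running, "SUCCESS" or "FAILURE" once it finishes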

Backup Route 53 zones

We have all heard about DNS catastrophes.  I just read a horror story on reddit the other day, where an Azure root DNS zone was accidentally deleted with no backup.  I experienced a similar disaster a few years ago – a simple DNS change managed to knock out internal DNS for an entire domain, which contained hundreds of records.  Reading the post hit close to home and stirred up some of my own past anxiety, so I began poking around for solutions.  Immediately, I noticed that backing up DNS records is usually skipped over as part of the backup process.  Folks just tend to never do it, for whatever reason.

I did discover, though, that backing up DNS is easy.  So I decided to fix the problem.

I wrote a simple shell script that dumps all Route 53 zones for a given AWS account to json files and uploads them to an S3 bucket.  The script is only a handful of lines, which is perfect because it doesn’t take much effort to potentially save your bacon.

If you don’t host DNS in AWS, the script can be modified to work for other DNS providers (assuming they have public APIs).

Here’s the script:

#!/usr/bin/env bash

set -e

# Dump Route 53 zones to json files and upload them to S3.

BACKUP_DIR=/home/<user>/dns-backup
BACKUP_BUCKET=<bucket>
# Use full paths for cron
CLIPATH="/usr/local/bin"

# Dump all zones to files and upload to S3
function backup_all_zones () {
  local zones
  # Enumerate all zone IDs, stripping the /hostedzone/ prefix
  zones=$($CLIPATH/aws route53 list-hosted-zones | jq -r '.HostedZones[].Id' | sed "s/\/hostedzone\///")
  for zone in $zones; do
    echo "Backing up zone $zone"
    $CLIPATH/aws route53 list-resource-record-sets --hosted-zone-id "$zone" > "$BACKUP_DIR/$zone.json"
  done

  # Upload backups to S3 with server-side encryption
  $CLIPATH/aws s3 cp "$BACKUP_DIR" "s3://$BACKUP_BUCKET" --recursive --sse
}

# Create the backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
# Back up all the things
time backup_all_zones

Be sure to update <user> and <bucket> in the script to match your own environment.  Dumping the DNS records to json is nice because it allows for a more programmatic way of working with the data.  The script can be run manually, but it is much more useful if run automatically.  Just add it to a cronjob and schedule it to dump DNS periodically.
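
For example, a crontab entry along these lines would dump the zones every night (the script path and log file are placeholders to adapt):

# Run the Route 53 backup every night at 2am
0 2 * * * /home/<user>/dns-backup.sh >> /var/log/dns-backup.log 2>&1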

For this script to work, the aws cli and jq need to be installed.  The installation is skipped in this post, but it is trivial.  Refer to the links for instructions.

The aws cli needs to be configured with an API key that has read access to Route 53 and the ability to write to S3.  Details are skipped for this step as well – be sure to consult the AWS documentation on setting up IAM permissions for help with creating API keys.  Another, simplified approach is to use a pre-existing key with admin credentials (not recommended).
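
As a rough illustration, an IAM policy scoped to just this script might look something like the following – treat it as a sketch and double check the actions against the AWS documentation:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}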

Backing up Jenkins configurations to S3

If you have worked with Jenkins for any extended length of time you quickly realize that the Jenkins server configurations can become complicated.  If the server ever breaks and you don’t have a good backup of all the configuration files, it can be extremely painful to recreate all of the jobs that you have configured.  And if you have started using the Jenkins workflow libraries, all of your custom scripts and code will disappear as well if you don’t back them up.

Luckily, backing up your Jenkins job configurations is a fairly simple and straightforward process.  Today I will cover one quick and dirty way to back up configs using a Jenkins job.

There are some AWS plugins that will back up your Jenkins configurations, but I found that it was just as easy to write a little bit of bash to do the backup, especially since I wanted to back up to S3, which none of the plugins I looked at handle.  In general, the plugins I looked at either felt a little too heavy for what I was trying to accomplish or didn’t offer the functionality I was looking for.

If you are still interested in using a plugin, there are a few backup plugins available in the Jenkins plugin index that are worth checking out.

Keep reading if none of those plugins look like a good fit.

The first step is to install the needed dependencies on your Jenkins server.  For the backup method I will be covering, the only tools needed are the aws cli, tar and rsync.  Tar and rsync should already be installed, and the aws cli can be downloaded and installed with pip on the Jenkins server that holds the configurations you want to back up.

pip install awscli

After the prerequisites have been installed, you will need to create your Jenkins job.  Click New Item -> Freestyle and input a name for the new job.

[Screenshot: naming the new Jenkins job]

Then you will need to configure the job.

The first step is figuring out how often you want to run this backup.  A simple strategy would be to back up once a day, which is illustrated below.

[Screenshot: the “Build periodically” schedule]

Note that the ‘H’ above tells Jenkins to randomize exactly when within the period the job runs, so that jobs sharing a schedule get spaced out instead of all starting at once.
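
If you prefer text to screenshots, a once-a-day schedule can be written in the “Build periodically” field like this (Jenkins picks the exact hashed minute and hour for the H fields):

H H * * *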

The next step is to back up the Jenkins files.  The logic is all written in bash, so if you are familiar with bash it should be easy to follow along.

# Delete all files in the workspace
rm -rf *

# Create a directory for the job definitions
mkdir -p "$BUILD_ID/jobs"

# Copy global configuration files into the workspace
cp "$JENKINS_HOME"/*.xml "$BUILD_ID/"

# Copy keys and secrets into the workspace
cp "$JENKINS_HOME/identity.key.enc" "$BUILD_ID/"
cp "$JENKINS_HOME/secret.key" "$BUILD_ID/"
cp "$JENKINS_HOME/secret.key.not-so-secret" "$BUILD_ID/"
cp -r "$JENKINS_HOME/secrets" "$BUILD_ID/"

# Copy user configuration files into the workspace
cp -r "$JENKINS_HOME/users" "$BUILD_ID/"

# Copy custom Pipeline workflow libraries
cp -r "$JENKINS_HOME/workflow-libs" "$BUILD_ID/"

# Copy job definitions into the workspace
rsync -am --include='config.xml' --include='*/' --prune-empty-dirs --exclude='*' "$JENKINS_HOME/jobs/" "$BUILD_ID/jobs/"

# Create an archive from all copied files (since the S3 plugin cannot copy folders recursively)
tar czf jenkins-configuration.tar.gz "$BUILD_ID/"

# Remove the directory so only the tar.gz gets copied to S3
rm -rf "$BUILD_ID"

Note that I am not backing up the job history because the history isn’t important for my uses.  If the history IS important, make sure to add a line to back up those locations (a rough example follows).  Likewise, feel free to modify and/or update anything else in the script if it suits your needs better.
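
For reference, build history lives under each job’s builds directory, so if you do want it, something along these lines could be added before the tar step – an untested sketch, adjust to taste:

# (Optional) also copy build history -- beware, this can be large
rsync -am --include='builds/***' --include='*/' --exclude='*' "$JENKINS_HOME/jobs/" "$BUILD_ID/jobs/"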

The last step is to copy the backup to another location.  This is why we installed the aws cli earlier.  Here I am just uploading the tar file to an S3 bucket, which is versioned (look up how to configure bucket versioning if you’re not familiar).

export AWS_DEFAULT_REGION="xxx"
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"

# Upload archive to S3
echo "Uploading archive to S3"
aws s3 cp jenkins-configuration.tar.gz s3://<bucket>/jenkins-backup/

# Remove tar.gz after it gets uploaded to S3
rm -rf jenkins-configuration.tar.gz

Replace AWS_DEFAULT_REGION with the region where the bucket lives (typically us-east-1), and make sure AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY belong to an account that can write to the S3 bucket (setting that up is not covered here).  Finally, replace <bucket> with the name of your own bucket.

The backup process itself is usually pretty fast unless the Jenkins server has a massive number of jobs and configurations.  Once you have configured the job, run it once to test whether it works.  If the job completes successfully, go check your S3 bucket and make sure the tar.gz file was uploaded.  If you are using versioning there should just be one file, and if you choose the “show versions” option you will see something similar to the following.

[Screenshot: versioned backup objects in the S3 bucket]
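
You can also verify the upload from the command line.  These are standard aws cli calls, with <bucket> again standing in for your bucket name:

# List the uploaded archive
aws s3 ls s3://<bucket>/jenkins-backup/
# List the stored versions of the archive
aws s3api list-object-versions --bucket <bucket> --prefix jenkins-backup/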

If everything went okay with your backup and upload to S3, you are done.  Common issues configuring this backup method include choosing the correct AWS bucket, region and credentials.  Also, double check where all of your Jenkins configurations live, in case they aren’t in a standard location.

Intro to Hyperterm

If you haven’t heard about it yet, Hyperterm (not to be confused with HyperTerminal) is a cool new project that brings JavaScript to the terminal.  Basically, Hyperterm allows a wide variety of customizations and extensions to be added to the terminal, yet doesn’t add extra bloat and keeps things fast.  For those who don’t know, Hyperterm is based on the Electron project, which leverages Node.js to build cross-platform desktop applications.

At its simplest, Hyperterm is a drop-in replacement for other terminal emulators, like iTerm2 or the default terminal app that comes packaged with most OS’s.  Since Hyperterm is built on top of Node (via Electron) it is cross platform by default, so it works on Mac and Linux today, with Windows support coming soon.  This is a win because you can port your configuration to different platforms without reconfiguring anything.  You can also store your configuration in source control so that if your machine ever dies or you get a new one, you have a nice place to pick things up again, which is pretty slick.

If you know JavaScript, you can start hacking on the look and feel of Hyperterm right away – the Chromium developer tools are literally built into it (cmd+option+i).

Installation

To get started, head over to the official Hyperterm website and download the latest release.

Once the download finishes and you go through the installation process, you are ready to get started.  Just fire up Hyperterm and you are good to go.

[Screenshot: the stock Hyperterm window]

The stock Hyperterm is definitely usable.  The real power, though, comes from the flexibility and design of the plugin system and configuration file, which makes customization easy to get going with and really powerful.

Configuration

Hyperterm uses its own configuration file to extend the basic functionality.  The docs are a great resource for learning more about customization and configuration.

The process of changing themes or adding functionality is pretty straightforward.  All the plugins that Hyperterm uses are just npm modules, so they can be installed and managed via npm.  For example, to change the default theme, you would open up your ~/.hyperterm.js file.

Look for the “plugins” section.

plugins: [],

Add the desired plugin.

plugins: [
    'hyperterm-atom-dark',
    'hyperline'
],

Then reload Hyperterm to pick up the new configuration by pressing Cmd+Shift+R or by clicking View -> Reload.  You should notice the new theme right away, and a nice status line should show up at the bottom of the terminal thanks to the ‘hyperline’ package.  Practically no time was spent enabling this functionality, which is a big win in my opinion.

For more ideas, definitely go check out the awesome-hyperterm project.  This repo is a great place to find out more about hyperterm and other cool projects that are related.  The official docs are also a great resource for getting started as well as finding some ideas.

Finally, you can also run:

npm search hyperterm

to get a full listing of npm packages with hyperterm in their name, for even more ideas.  Outside of the plugins, you can easily hack on the configuration file itself to test out how things work.  Again, the config is just JavaScript, so if you know JS it is easy to start modifying things.

Additionally, you can tweak the configuration by hand to customize things like font sizes, colors, the cursor, etc. without having to install or use any plugins.  The process is similar to installing plugins: pop open the ~/.hyperterm.js file, make any adjustments, then reload the terminal and you should be good to go.
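
For a sense of the file’s shape, the config section sits right alongside the plugins list – the values here are purely illustrative, so adjust them to taste:

module.exports = {
  config: {
    // example values only
    fontSize: 14,
    fontFamily: 'Menlo, monospace',
    cursorColor: '#f81ce5'
  },
  plugins: [
    'hyperterm-atom-dark',
    'hyperline'
  ]
};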

Conclusion

The Hyperterm project is still very new, but it is already capable of serving as a default terminal.  As the project grows in popularity, there will be more and more options for customization, and the terminal itself will continue to improve.  It is exciting to see something new in the terminal emulator space, because there are so few options.  It will be cool to see what new developments are in the works for the project.

It is definitely hard to adjust to something new, but it is also good to get out of your comfort zone sometimes.  There are lots of things to poke around at and plugins to try out with Hyperterm.

I can’t remember the last time I had this much fun fiddling around with terminal settings.  So at the very least, even if you don’t switch full time to Hyperterm, give it a try and see if it is a good fit.
