Exploring Linux Terminals

I have been experimenting with Linux on a recently acquired Lenovo X1 Carbon and as part of the process I have been exploring and testing out various productivity tools, including terminal emulators.

The first thing I discovered in this process is that there are a lot of terminals out there. They're all slightly different and, in my experience, almost none of them could do all of the tweaks I like.

Here is the list of terminals I have tested out (so far).

As you can see, that is a bunch of terminals. To keep things short, the best three I found in my exploration and testing were as follows.

Alacritty – 3rd

It is fast, the defaults are great, and it works perfectly with tiling window managers. This one would probably have been first on the list if it were able to provide a blinking cursor, but as I found with many of the options, there almost always seemed to be a gotcha.

A few high points for this terminal: it is written in Rust, it uses the GPU to offload rendering, it is cross platform, and, maybe my favorite feature, the configuration file is YAML based. The docs are also good and the community is really taking off, so I have a strong feeling this one will continue improving.

urxvt – 2nd

The biggest problem with this terminal for me is that it is painful to configure. There are no preferences configured out of the box, which some prefer, but digging through old blog posts and Perl scripts is not how I like to spend my time.

This terminal is second on the list because it was the only other terminal able to do all of the small tweaks and adjustments that suit my preferences. It is also fast and lightweight, which are good things.

Here is the configuration I ended up with, if you're interested.

Kitty – 1st

This terminal was a clear winner. It was able to do all of my custom tweaks and settings and because it has nice defaults I only needed to add about 10 lines of extra configuration.

This terminal has a lot of other stuff going for it. It is written in C and Python, it uses the GPU to offload rendering, which makes it fast, it is easy to configure, and for the most part it just works. The only gotcha I found was that I needed to explicitly turn on copy to selection in my configuration, but that was easy.

Here is the configuration I ended up with, if you're interested.
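For illustration, the copy-to-selection tweak mentioned above is likely a single line in ~/.config/kitty/kitty.conf (a sketch based on kitty's copy_on_select option; the exact value depends on your preferences):

# kitty.conf - copy selected text straight to the clipboard
copy_on_select clipboard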

Conclusion

I plan on keeping this list updated to some extent as I find more terminals to try out. As you can see, there are many different options, and there seem to be more all the time.

Your experience may differ, so take these musings with a pinch of salt, and please do look through the various options and try things out to see if they will work for you. That said, I do think my use case is specific enough that these recommendations should be helpful in guiding most users.

Here is the repo with all of my various configurations if you want to check something out. There were a few terminal configs that didn’t end up there just because they were too minimal or I didn’t like them.


Idempotent Shell Scripts with Terraform

One challenge when dealing with Terraform is keeping things clean and repeatable. My current favorite approach to accomplishing this with shell scripts is to use a combination of null_resources and triggers to control when scripts should be rerun. These controls, combined with Terraform provisioners and template_files, provide a nice, flexible way to deal with otherwise potentially messy scripts.

I like to provide more than one trigger so that they can be recomputed when either a variable that gets passed into the script changes, or the script itself changes. This trick is handy when you need to bend Terraform into doing something it doesn't usually handle, like some startup or bootstrap process.

Below is the full example with templated scripts and triggers in Terraform v0.11. The same logic should also work with Terraform v0.12 with minimal changes.

data "template_file" "cool_script" {
  template = "${file("${path.module}/script.sh")}"

  vars {
    my_cool_var = "${var.my_cool_var}"
  }
}

resource "null_resource" "script" {

  # Rerun the script when the variables or the script itself change
  triggers = {
    my_trigger = "${var.my_cool_var}"
    script_sha = "${sha256(file("${path.module}/script.sh"))}"
  }

  provisioner "local-exec" {
    command     = "${data.template_file.cool_script.rendered}"
    interpreter = ["/bin/bash", "-c"]
  }
}

You can provide as many variables to the template_file as the script needs and any time those get updated, the null_resource will pick these changes up and update/rerun your script for you. Notice that the provisioner in the null_resource is basically set up to call bash against the rendered script that we created, using the values in the vars.
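For reference, the script.sh template itself is just an ordinary shell script; Terraform fills in the placeholders at render time. Here is a minimal sketch (the echo is purely illustrative):

#!/usr/bin/env bash
# script.sh - rendered by the template_file data source above.
# ${my_cool_var} is replaced by Terraform at render time; if you need a
# literal shell variable, escape the dollar sign as $$.
echo "Bootstrapping with value: ${my_cool_var}"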

If you call the script again, nothing should change because we already computed the SHA of the rendered script and told Terraform to keep track of that state. The line below is a simple way of telling Terraform to keep track of changes to the script/template.

script_sha = "${sha256(file("${path.module}/script.sh"))}"

Once the script is updated, the next time the script gets called, its values and output should update accordingly. Likewise, since we are also using a var as a trigger, when that changes the script will also be updated.

Terraform is flexible enough to allow us to do things like this because oftentimes there are situations and edge cases where Terraform can't really perform some action. The above example is a common workaround for provisioning resources that either don't have an API Terraform can tap into, or that involve tasks only handled via some startup or bootstrap script/process.


Bash Quick Substitution


One thing that I love about bash is that there is never a shortage of new tips and tricks to learn. I have been using bash for over 10 years now and just stumbled on this little trick.

This one (as the title implies) allows you to quickly substitute a string into the previous command and rerun the command with the substitution.

Quick substitution is officially part of the Bash Event Designators mechanism and is a great way to fix a typo from a previous command. Below is an example.

# Simple example to highlight substitutions
echo foo

# This will replace the string "foo" with "bar" and rerun the last command
^foo^bar

This shorthand notation is great for most use cases, with the exception of needing to replace multiple instances of a given string. Luckily that is easily addressed with the Event Designators expanded substitution syntax, shown below.

# This will substitute ALL occurrences of foo in the previous command
!!:gs/foo/bar/

# Slightly different syntax allows you to do the same thing in ZSH
^foo^bar^:G

The syntax is slightly more complicated in the first example but should be familiar enough to anyone that has used sed and/or vim substitutions, and the second example is almost identical to the shorthand substitution.

fc

Taking things one step further, we can actually edit the previous command to fix anything more involved than a typo or a different argument. fc is a bash builtin, so it is available almost everywhere.

fc is especially useful for dealing with very long, complicated commands.

# Oops, we messed this up
echo fobarr | grep bar

# To fix it, just open the above in your default editor
fc

# When you write and quit the file, it will put the contents into your current command
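As an aside, fc can also perform quick substitutions directly via its -s flag, which is a portable alternative to the ^foo^bar shorthand:

# Replace the first occurrence of "fobarr" with "foobar" in the last command and rerun it
fc -s fobarr=foobar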

There are many great tutorials available so I would recommend looking around to see all the options and get more ideas.


Untangling CLI Text Processing Tools

Update (2/7/2020): Added jc

The landscape of command line driven text manipulation and processing tools is somewhat large and confusing, with more and more tools emerging all the time. Because I am having trouble keeping them all in my head, I decided to make a little reference guide to help remember which tool to choose for the task at hand.

  • jq
  • jc
  • yq (both python and go versions)
  • oq
  • xq
  • hq
  • jk
  • ytt
  • configula
  • jsonnet
  • sed (for everything else)

jq

Reach for jq first when you need to do any kind of processing with JSON files.

From the website, “jq is like sed for JSON data – you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.”

sudo apt-get install jq
# or
brew install jq

# consume json and output as unchanged json
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.'
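And a slightly more useful filter, pulling a single field out of each object in the response (field names per the GitHub commits API):

# extract just the commit messages
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[].commit.message'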

jc

This project looks very interesting and is a refreshing thing to see in the Linux world. jc is basically a way to create structured objects (à la PowerShell) as JSON output from running various Linux commands. From the GitHub repo: “This tool serializes the output of popular gnu linux command line tools and file types to structured JSON output. This allows piping of output to tools like jq.”

Transforming command output into structured objects can massively simplify working with it, especially when the output is piped into jq.

pip3 install --upgrade jc
df | jc --df -p
[
  {
    "filesystem": "devtmpfs",
    "1k_blocks": 1918816,
    "used": 0,
    "available": 1918816,
    "use_percent": 0,
    "mounted_on": "/dev"
  },
  {
    "filesystem": "tmpfs",
    "1k_blocks": 1930664,
    "used": 0,
    "available": 1930664,
    "use_percent": 0,
    "mounted_on": "/dev/shm"
  },
  ...
]
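For example, pairing the structured output above with jq makes it trivial to grab individual fields:

# list just the mount points reported by df
df | jc --df -p | jq -r '.[].mounted_on'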

yq (Python) (Go)

This tool can be confusing because there is both a Python version and a Go version. On top of that, the Python version includes its own version of xq, which is different than the standalone xq tool.

The main difference between the Python and Go versions is that the Python version can deal with both yaml and xml, while the Go version is a command line tool that deals only with yaml.

pip install yq
cat input.yml | yq -y .foo.bar
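The Go version uses its own path syntax rather than jq filters; at the time of writing (v3), reading a value looks something like this:

# Go version of yq
yq r input.yml foo.bar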

oq

From the website, oq is “A performant, portable jq wrapper that facilitates the consumption and output of formats other than JSON; using jq filters to transform the data”.

The claim to fame that oq has is that it is very similar to jq but works better with other data formats, including xml and yaml. For example, you can read in some xml (among others), apply some filters, and output to yaml (among others). This flexibility makes oq a good option if you need to deal with data formats outside of JSON.

snap install oq
# or
brew tap blacksmoke16/tap && brew install oq

# consume json and output xml
echo '{"name": "Jim"}' | oq -o xml .

xq

From the Github page, “Apply XPath expressions to XML, like jq does for JSONPath and JSON”.

The coolest use case I have found for xq so far is taking in an xml file and outputting it as a json file; surprisingly, I haven't found another tool that can do this (the oq authors say there are plans to add it in the future). The simplest example is to curl a page, pipe it through xq to convert it to json, and then pipe it again through jq to manipulate the data.

# xq is installed as part of the Python yq package
pip install yq
curl -s https://mysite.xml | xq .
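And the full pipeline described above, using a hypothetical URL and field:

# convert xml to json, then manipulate the result with jq
curl -s https://mysite.xml | xq . | jq '.html.head.title'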

hq

Like xq (and jq) but for html parsing. This tool is handy for manipulating html in the same way you would xml or json.

pip install hq
cat /path/to/file.html | hq '`Hello, ${/html/head/title}!`'

jk

This is a newer tool with a slightly different approach, aimed at helping to automate configurations, especially for things like Kubernetes, though it should work with most structured data.

This tool works with json, yaml, and hcl and can be used in conjunction with JavaScript, making it an interesting option.

# macOS (darwin) binary shown; Linux builds are also available on the releases page
curl -Lo jk https://github.com/jkcfg/jk/releases/download/0.3.1/jk-darwin-amd64
chmod +x jk
sudo mv jk /usr/local/bin/

// alice.js
const alice = {
  name: 'Alice',
  beverage: 'Club-Mate',
  monitors: 2,
  languages: [
    'python',
    'haskell',
    'c++',
    '68k assembly', // Alice is cool like that!
  ],
};

// Instruct to write the alice object as a YAML file.
export default [
  { value: alice, file: `developers/${alice.name.toLowerCase()}.yaml` },
];

jk generate -v alice.js
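Running the above should write the object out as YAML to developers/alice.yaml, something like this (key order may differ):

beverage: Club-Mate
languages:
- python
- haskell
- c++
- 68k assembly
monitors: 2
name: Alice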

ytt

This is basically a simplified templating language that only intends to deal with yaml. The approach the authors took was to create yaml templates and sandbox/embed a Python-like language (Starlark) in the templating engine, allowing users to call on much of the power of Python inside their templates.

The easiest way to play around with ytt if you don’t want to clone the repo is to try out the online playground.

curl -Lo ytt https://github.com/k14s/ytt/releases/download/v0.25.0/ytt-darwin-amd64
chmod +x ytt
sudo mv ytt /usr/local/bin/

git clone https://github.com/k14s/ytt.git && cd ytt
ytt -f examples/playground/example-demo/

configula

From the GitHub page: “Configula is a configuration generation language and processor. Its goal is to make the programmatic definition of declarative configuration easy and intuitive.”

Similar in some ways to ytt, but instead of embedding Python into the yaml template file, you create a .py file and then render it into yaml using the Configula command line tool.

git clone https://github.com/brendandburns/configula
cd configula

# tiny.py
# Define a YAML object where the 'foo' field has the value of evaluating 1 + 2 (e.g. 3)
my_obj = foo: !~ 1 + 2
my_obj.render()

./configula examples/tiny.py
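Rendering the example evaluates the expression and emits plain YAML, which should look something like:

foo: 3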

jsonnet

Jsonnet bills itself as a “data templating language for app and tool developers”. This tool was originally created by folks working at Google and has been around for quite some time now. For some reason it always seems to fly under the radar, but it is super powerful.

This tool is a superset of JSON and allows you to add conditionals, loops and other functions available as part of its standard library. Jsonnet can render itself into json and yaml output.

pip install jsonnet
# or
brew install jsonnet

// example.jsonnet
{
  person1: {
    name: "Alice",
    welcome: "Hello " + self.name + "!",
  },
  person2: self.person1 { name: "Bob" },
}

# note: no -S flag here; -S expects a top-level string result and would error on this example
jsonnet example.jsonnet
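For reference, the example above renders to the following JSON:

{
  "person1": {
    "name": "Alice",
    "welcome": "Hello Alice!"
  },
  "person2": {
    "name": "Bob",
    "welcome": "Hello Bob!"
  }
}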

sed

For (pretty much) everything else, there is Sed. Sed, short for stream editor, has been around forever and is basically a Swiss army knife for manipulating text; if you have been using *nix for any length of time, you have more than likely come across this tool before. From the docs: “sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream.”

The odds are good that Sed will likely do what you’re looking for if you can’t use one of the aforementioned tools.
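As a trivial example, the classic global search and replace:

# replace every occurrence of foo with bar, writing the result to stdout
sed 's/foo/bar/g' input.txt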


Automatically set Terraform environment variables


If you have spent any time managing infrastructure you have probably run into the issue of needing to set environment variables in order to connect to various resources.

For example, the AWS Terraform provider allows you to automatically source local environment variables, which solves the issue of placing secrets in places they shouldn't be, i.e. writing keys into configurations or state. This use case is pretty straightforward: you can just set the environment variables once and everything will be able to connect.

The problem arises when environments need to be changed frequently or other configurations requiring secrets or tokens to connect to other resources need to be changed.

There are various tools available to solve the issue of managing and storing secrets for connecting to AWS, including aws-vault and aws-env. The main issue is that these tools are opinionated, only work with AWS, and aren't designed to work with Terraform.

Of course this problem isn't specific to Terraform, but the solution I have discovered using ondir is generic enough that it can be applied to Terraform as well as any other task that involves setting environment variables or running shell commands whenever a specific directory is entered or left.

With that said, the ondir tool, in combination with Terragrunt, gives you a set of tools that vastly improves the Terraform experience. There is a lot of Terragrunt reference material out there already, so I won't discuss it much here. The real power of my solution comes from combining ondir with Terragrunt to enhance how you use Terraform.

Note: Before getting into more details of ondir, it is worth mentioning that there are several other similar tools, the most well known of which is direnv. Direnv provides a few other features and a nice library for completing actions, so it is definitely worth checking out if you are not looking for a solution specific to Terraform/Terragrunt. Other tools in the space include autoenv, smartcd, and a few others.

Direnv should technically work to solve the problem, but when managing and maintaining large infrastructures, I have found it to be more tedious and cumbersome to manage.
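For comparison, the direnv equivalent of the ondir setup below would be a small .envrc file in the target directory (values are just examples):

# .envrc - direnv evaluates this automatically when you cd in
export TF_VAR_db_password=mypassword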

Ondir

From the Github page, ondir is a small program to automate tasks specific to certain directories. It isn't designed specifically to set environment variables, but it is a perfect fit for doing so.

Install ondir

# OSX
brew install ondir
# Ubuntu
apt install ondir

Configure ondir

After installing, add the following lines to your ~/.zshrc (or these lines if you are using bash) and restart your shell.

eval_ondir() {
  eval "`ondir \"$OLDPWD\" \"$PWD\"`"
}
chpwd_functions=( eval_ondir $chpwd_functions )

Ondir leverages an ~/.ondirrc file for configuration. This file basically consists of enter and leave directives that perform actions when specific directories are entered and left. To make this tool useful with Terragrunt, I have set up a simple example below that exports a database password whenever you switch into a path ending in stage/database, as defined in the ~/.ondirrc file. One very nice feature of ondir is that the enter/leave directives accept regular expressions, which allows complex paths to be matched.

enter .+(stage\/database)
    echo "Setting database password"
    export TF_VAR_db_password=mypassword
leave .+(stage\/database)
    unset TF_VAR_db_password

With this logic, the shell should automatically export a database password for Terraform to use whenever you switch to the directory ending in stage/database.

cd ~/project/path/to/stage/database
# Check your environment for the variable
env | grep TF_VAR
TF_VAR_db_password=mypassword

cd ~
# When you leave the directory it should get unset
env | grep TF_VAR

In a slightly more complicated example, you may have a databases.sh script containing more logic for setting up or exporting variables or otherwise setting up environments.

#!/usr/bin/env bash

db_env=$(pwd | awk -F '/' '{print $(NF-1)"/"$NF}')

if [[ "$db_env" == "stage/database" ]]; then
    # do stuff
    export TF_VAR_db_password="mypassword"

elif [[ "$db_env" == "prod/database" ]]; then
    # do other stuff
    export TF_VAR_db_password="myotherpassword"
fi
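You could then source that script from the ~/.ondirrc directives instead of exporting values inline (the script path here is just an example):

enter .+(database)
    source ~/scripts/databases.sh
leave .+(database)
    unset TF_VAR_db_password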

Conclusion

That's pretty much it. Now you can automatically set environment variables for Terraform to use based on the directory you are currently in, and you no longer need to worry about setting and unsetting environment variables manually.

I HIGHLY recommend checking out direnv as well. It offers slightly different functionality and some neat features that in some cases may actually be a better fit than ondir for specific tasks. For example, the direnv stdlib has some great helpers for configuring environments cleanly.

For me, using ondir for managing my Terragrunt configs and direnv for managing other projects makes the most sense.
