Python virtualenv Notes

Virtual environments are really useful for maintaining different sets of packages and for separating project environments without making a mess of your system.  In this post I will go over some of the various virtual environment tricks that I always seem to forget if I haven’t worked with Python in a while.

This post is meant to be mostly a reference for remembering commands, syntax and other helpful notes.  I’d also like to mention that these steps were all tested on OSX; I haven’t tried them on Windows, so I don’t know if the process differs there.

Working with virtual environments

There are a few pieces needed in order to get started.  First, the default version of Python that ships with OSX is 2.7, which is slowly moving towards extinction.  Unfortunately, it isn’t exactly obvious how to replace this version of Python on OSX.

Just doing a “brew install python” won’t actually point the system at the newly installed version.  In order to get Python 3.x working correctly, you need to update the system PATH so that the brew installed Python 3 is found first.

export PATH="/usr/local/opt/python/libexec/bin:$PATH"

You will want to put the above line into your bashrc or zshrc (or whatever shell profile you use) to get the brew installed Python onto your system path by default.

Another thing I discovered – in Python 3 there is a built-in command for creating virtual environments, which removes the need to install the virtualenv package.

Here is the Python 3 command to create a new virtual environment.

python -m venv test

Once the environment has been created, working with it is very similar to virtualenv.  To use the environment, just source its activate script.

source test/bin/activate

To deactivate the environment just use the “deactivate” command, like you would in virtualenv.
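Putting it all together, the end-to-end workflow looks something like the following – a minimal sketch, assuming the brew installed Python 3 from above and a throwaway environment named test (requests here is just an example package to install).

python -m venv test
source test/bin/activate
pip install requests
deactivate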

The virtualenv package

If you like the old school way of doing virtual environments you can still use the virtualenv package, along with the handy virtualenvwrapper helpers, for managing your environments.

Once you have the correct version of Python, you will want to install the virtualenvwrapper package on your system globally in order to work with virtual environments (it pulls in virtualenv as a dependency).

sudo pip install virtualenvwrapper

You can check that the packages were installed correctly by running virtualenv -h.  There are a number of useful virtualenvwrapper commands listed below for working with the environments.
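One note before the commands below will work – virtualenvwrapper needs a little bit of shell setup.  Something like the following in your bashrc/zshrc should do it, though the path to virtualenvwrapper.sh may differ depending on how pip installed it.

export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh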

Make a new virtual env

mkvirtualenv test-env

Work on a virtual env

workon test-env

Stop working on a virtual env

(when env is active) deactivate

List environments

lsvirtualenv

Remove a virtual environment

rmvirtualenv test-env

Create virtualenv with specific version of python

mkvirtualenv -p $(which pypy) test2-env

Look at where environments are stored

ls ~/.virtualenvs

I’ll leave it here for now.  The commands and tricks I (re)discovered were enough to get me back to being productive with virtual environments.  If you have some other tips or tricks feel free to let me know.  I will update this post if I find anything else that is noteworthy.


Kubernetes CLI Tricks


Kubernetes is complicated, as you’ve probably already discovered if you’ve used it before.  Likewise, the Kubectl command line tool can pretty much do anything, but can feel cumbersome, clunky and generally overwhelming for those that are new to the Kubernetes ecosystem.  In this post I want to take some time to describe a few of the CLI tools that I have discovered that help ease the pain of working with and managing Kubernetes from the command line.

There are many more tools out there and the list keeps growing, so I will probably revisit this post in the future to add more cool stuff as the community continues to grow and evolve.

Where to find projects?

As a side note, there are a few places to check for tools and projects.  The first is the CNCF Cloud Native Landscape.  This site aims to keep track of all the various projects in the Cloud/Kubernetes world.  An entire post could be written about all of the features and filters but at the highest level it is useful for exploring and discovering all the different evolving projects.  Make sure to check out the filtering capabilities.

The other project I have found to be extremely useful for finding different projects is the awesome-kubernetes repo on Github.  I found a number of tools mentioned in this post because of the awesome-kubernetes project.  There is some overlap between the Cloud Native Landscape and awesome-kubernetes but they mostly complement each other very nicely.  For example, awesome-kubernetes has a lot more resources for working with Kubernetes and a lot of the smaller projects and utilities that haven’t made it into the Cloud Native Landscape.  Definitely check this project out if you’re looking to explore more of the Kubernetes ecosystem.

Kubectl tricks

These are various little tidbits that I have found to help boost my productivity from the CLI.

Tab completion – The first thing you will probably want to get working when starting.  There are just too many options to memorize and tab completion provides a nice way to look through all of the various commands when learning how Kubernetes works.  To install (on OS X) run the following command.

brew install bash-completion

In bash, adding the completion is as simple as running source <(kubectl completion bash).  The same behavior can be accomplished in zsh using source <(kubectl completion zsh).
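To make the completion stick between sessions, append it to your shell profile.  For example, in zsh:

echo 'source <(kubectl completion zsh)' >> ~/.zshrc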

Aliases and shortcuts – One distinct flavor of Kubernetes is how cumbersome the CLI can be.  If you use Zsh and something like oh-my-zsh, there is a default set of aliases that work pretty well, which you can find here.  There are many posts about aliases out there already so I won’t go into too much detail about them.  I will say though that aliasing k to kubectl is one of the best time savers I have found so far.  Just add the following snippet to your bash/zsh profile for maximum glory.

alias k=kubectl
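In bash you can also hook kubectl’s tab completion up to the new alias – a small sketch, assuming the completion from the previous section is already loaded (zsh handles alias completion automatically).

complete -o default -F __start_kubectl k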

kubectl --export – This is a nice hidden feature that basically allows users to switch Kubernetes from imperative (create) to declarative (apply).  The --export flag takes an existing object and strips out unwanted/unneeded metadata like statuses and timestamps, presenting a clean version of what’s running, which can then be exported to a file and applied to the cluster.  The biggest advantage of using declarative configs is the ability to manage and maintain them in git repos.
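For example, to capture a clean copy of an existing object (mydeployment is a hypothetical name here) and manage it declaratively from then on:

kubectl get deployment mydeployment -o yaml --export > mydeployment.yaml
kubectl apply -f mydeployment.yaml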

kubectl top – In newer versions, there is the top command, which gives a high level overview of CPU and memory utilization in the cluster.  Utilization can be filtered at the node level as well as the pod level to give a very quick and dirty view into potential bottlenecks in the cluster.  In older versions, Heapster needs to be installed for this functionality to work correctly; newer versions need metrics-server to be running.
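Usage is straightforward – for example, checking utilization across nodes, or for pods in a specific namespace:

kubectl top nodes
kubectl top pods -n kube-system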

kubectl explain – This is a utility built in to Kubectl that basically provides a man page for what each Kubernetes resource does.  It is a simple way to explore Kubernetes without leaving the terminal.
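For example, to pull up the documentation for the containers field of a pod spec:

kubectl explain pod.spec.containers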

kubectx/kubens

This is an amazing little utility for quickly moving between Kubernetes contexts and namespaces.  Once you start working with multiple different Kubernetes clusters, you notice how cumbersome it is to switch between environments and namespaces.  Kubectx solves this problem by providing a quick and easy way to see what environments and namespaces a user is currently in and also quickly switch between them.  I haven’t had any issues with this tool and it is quickly becoming one of my favorites.
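Basic usage looks like the following (my-cluster is a hypothetical context name).

kubectx                # list the available contexts
kubectx my-cluster     # switch to the my-cluster context
kubens kube-system     # switch the active namespace to kube-system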

stern

Dealing with log output using Kubectl is a bit of a chore.  Stern (and similarly kail) offer a much nicer user experience when dealing with logs.  These tools allow users to do things like show logs for multiple containers in a pod, use regex matching to tail logs for specific containers, get nice colored output for distinguishing between logs, filter logs by namespace and a bunch of other nice features.
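As a quick sketch (my-app is a hypothetical pod name pattern), the following tails the last few lines of logs from every matching pod in a namespace:

stern my-app -n default --tail 20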

Obviously for a full setup, using an aggregated/centralized logging solution with something like Fluentd or Logstash would be more ideal, but for examining logs in a pinch, these tools do a great job and are among my favorites.  As an added bonus, I don’t have to copy/paste log names any more.

yq

yq is a nice little command line tool for parsing yaml files, which works in a similar way to the venerable jq.  Parsing, reading and updating yaml can sometimes be tricky and this tool is a great and lightweight way to manipulate configurations.  This tool is especially useful for things like CI/CD where a tag or version might change that is nested deep inside yaml.
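Here is a minimal sketch of reading and then bumping a nested image tag – assuming a hypothetical deployment.yaml and the v2 syntax of yq (newer versions of the tool have since changed the syntax).

yq r deployment.yaml spec.template.spec.containers[0].image
yq w -i deployment.yaml spec.template.spec.containers[0].image myapp:v2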

There is also the lesser known jsonpath output option baked into kubectl that allows you to interact with the json version of a Kubernetes object.  This feature is definitely less powerful than jq/yq but works well when you don’t want to overcomplicate things.  Below you can see how to use it to quickly grab the name of the first pod returned.

kubectl get pods -o=jsonpath='{.items[0].metadata.name}'

Working with yaml and json for configuration in general seems to be an emerging pattern for almost all of the new projects.  It is definitely worth learning a few tools like yq and jq to get better at parsing and manipulating data.

ksonnet / jsonnet

Similar to the above, ksonnet and jsonnet are basically templating tools for working with Kubernetes and json objects.  These two tools work nicely for managing Kubernetes manifests and make a great fit for automating deployments, etc. with CI/CD.

ksonnet and jsonnet are gaining popularity because of their ease of use and simplicity compared to a tool like Helm, which also does templating but (in its current v2 form) needs a pod with system level permissions (Tiller) running in the Kubernetes cluster.  Jsonnet is all client side, which removes that added attack vector while still providing users with the flexibility of a templating language for creating and managing configs.
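As a minimal sketch, a hypothetical manifest.jsonnet that evaluates to a Kubernetes object can be rendered client side and piped straight into kubectl, which happily accepts json:

jsonnet manifest.jsonnet | kubectl apply -f -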

More random Kubernetes tricks

Since 1.10, kubectl has the ability to port forward to a resource name rather than just a pod.  So instead of looking up running pods and connecting to one every time, you can just grab the service or deployment name and port forward to it.

port-forward TYPE/NAME [LOCAL_PORT:]REMOTE_PORT
k port-forward deployment/mydeployment 5000:6000

New in 1.11, which will be dropping soonish, there is a top level command called api-resources, which allows users to view and interact with API objects.  This will be a nice troubleshooting tool to have if, for example, you want to see what kinds of objects are in a namespace.  The following command will show you those objects.

k api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n foo

Another handy trick is the ability to grab a base64 string and decode it on the fly.  This is useful when you are working with secrets and need to quickly look at what’s in the secret.  You can adapt the following command to accomplish this (make sure you have jq installed).

k get secret my-secret --namespace default -o json | jq -r '.data | .["secret-field"]' | base64 --decode

Just replace .["secret-field"] with your own field.

UPDATE: I just recently discovered a simple command line tool for decoding base64 on the fly called Kubernetes Secret Decode (ksd for short).  This tool looks for base64 and renders it out for you automatically so you don’t have to worry about screwing around with jq and base64 to extract data out when you want to look at a secret.

k get secret my-secret --namespace default -o json | ksd

That command is much cleaner and easier to use.  This utility is a Go app and there are binaries for it on the releases page; just download it, put it in your path and you are good to go.

Conclusion

The Kubernetes ecosystem is a vast world, and it only continues to grow and evolve.  There are many more kubectl use cases and community tools that I haven’t discovered yet.  Feel free to let me know any other kubectl tricks you know of, and I will update them here.

I would love to grow this list over time as I get more acquainted with Kubernetes and its different tools.


Templated Nginx configuration with Bash and Docker

Shoutout to @shakefu for his Nginx and Bash wizardry in figuring a lot of this stuff out.  I’d like to take credit for this, but he’s the one who got a lot of it working originally.

Sometimes it can be useful to template Nginx files to use environment variables to fine tune and adjust control for various aspects of Nginx.  A recent example of this idea that I worked on was a scenario where I set up an Nginx proxy with a very bare bones configuration.  As part of the project, I wanted a quick and easy way to update some of the major Nginx configurations like the port it uses to listen for traffic, the server name, upstream servers, etc.

It turns out that there is a quick and dirty way to template basic Nginx configurations using Bash, which ended up being really useful so I thought I would share it.  There are a few caveats to this method but it is definitely worth the effort if you have a simple setup or a setup that requires some changes periodically.  I stuck the configuration into a Dockerfile so that it can easily be updated and ported around – by using the nginx:alpine image as the base image the total size all said and done is around 16MB.  If you’re not interested in the Docker bits, feel free to skip them.

The first part of using this method is to create a simple configuration file that will be used to substitute in some environment variables.  Here is a simple template that is useful for changing a few Nginx settings.  I called it nginx.tmpl, which will be important for how the template gets rendered later.

events {}

http {
  error_log stderr;
  access_log /dev/stdout;

  upstream upstream_servers {
    server ${UPSTREAM};
  }

  server {
    listen ${LISTEN_PORT};
    server_name ${SERVER_NAME};
    resolver ${RESOLVER};
    set ${ESC}upstream ${UPSTREAM_PROTO}://${UPSTREAM};

    # Allow injecting extra configuration into the server block
    ${SERVER_EXTRA_CONF}

    location / {
       proxy_pass ${ESC}upstream;
    }
  }
}

The configuration is mostly straightforward.  We are basically just taking this configuration file and substituting in a few templated variables denoted by the ${VARIABLE} syntax, which are just environment variables that get inserted into the configuration when it gets bootstrapped.  There are a few “tricks” that you may need to use if your configuration starts to get more complicated.  The first is the use of the ${ESC} variable.  Nginx uses the ‘$’ for its variables, which collides with the template syntax, so the extra ${ESC} basically just gives us a way to escape that $ and use Nginx variables as well as templated variables.  Also note the ${UPSTREAM_PROTO} prefix on the upstream – when proxy_pass is handed a variable, Nginx expects the value to be a full URL including the scheme, so the protocol gets templated in as well.

The other interesting thing that we discovered (props to shakefu for this magic) was that you can basically jam arbitrary server block level configurations into an environment variable.  We do this with the ${SERVER_EXTRA_CONF} in the above configuration and I will show an example of how to use that environment variable later.

Next, I created a simple Dockerfile that provides default values for some of the various templated variables.  The Dockerfile also copies the templated configuration into the image, and does some Bash magic for rendering the template.

FROM nginx:alpine

ENV LISTEN_PORT=8080 \
  SERVER_NAME=_ \
  RESOLVER=8.8.8.8 \
  UPSTREAM=icanhazip.com:80 \
  UPSTREAM_PROTO=http \
  ESC='$'

COPY nginx.tmpl /etc/nginx/nginx.tmpl

CMD /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

There are some things to note.  First, not all of the variables in the template need to be declared in the Dockerfile, which means that if the variable isn’t set it will be blank in the rendered template and just won’t do anything.  There are some variables that need defaults, so if you ever run across that scenario you can just add them to the Dockerfile and rebuild.

The other interesting thing is how the template gets rendered.  There is a small tool called envsubst – it ships with the gettext package and is included in the nginx:alpine image – that substitutes the values of environment variables into files.  In the Dockerfile, this tool gets executed as part of the default command, taking the template as the input and creating the final configuration.
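You can see the substitution behavior from any shell that has envsubst installed:

echo 'listen ${LISTEN_PORT};' | LISTEN_PORT=8080 envsubst

This prints listen 8080; – the same thing happens to the whole template file when the container starts.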

/bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf

Nginx gets started in a slightly silly way so that daemon mode can be disabled (we want Nginx running in the foreground) and if that fails, the rendered template gets read to help look for errors in the rendered configuration.

&& nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

To quickly test the configuration, you can create a simple docker-compose.yml file with a few of the desired environment variables, like I have below.

version: '3'
services:
  nginx_proxy:
    build:
      context: .
      dockerfile: Dockerfile
    # Only test the configuration
    #command: /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && cat /etc/nginx/nginx.conf"
    volumes:
      - "./nginx.tmpl:/etc/nginx/nginx.tmpl"
    ports:
      - 80:80
    environment:
      - SERVER_NAME=_
      - LISTEN_PORT=80
      - UPSTREAM=test1.com
      - UPSTREAM_PROTO=https
      # Override the resolver
      - RESOLVER=4.2.2.2
      # The following would add an escape if it isn't in the Dockerfile
      # - ESC=$$

Then you can bring up the Nginx server.

docker-compose up

The configuration doesn’t get rendered until the container is run, so to test the configuration only, you can swap in a command (like the commented out one in the docker-compose file above) that renders the configuration and then prints it out to make sure it looks right.
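For example, a one-off render (without actually starting Nginx) might look something like this:

docker-compose run --rm nginx_proxy /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl"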

If you are interested in adding additional configuration you can use ${SERVER_EXTRA_CONF} as alluded to above.  Below is an arbitrary snippet that can be assigned to that environment variable; it allows connections to do long polling against Nginx, which basically means that Nginx will try to hold connections open for existing clients for longer.

error_page 420 = @longpoll;
if ($arg_wait = "true") { return 420; }

location @longpoll {
  # Proxy requests to upstream
  proxy_pass $upstream;
  # Allow long lived connections
  proxy_buffering off;
  proxy_read_timeout 900s;
  keepalive_timeout 160s;
  keepalive_requests 100000;
}

The above snippet would be a perfectly valid environment variable as far as the container is concerned; it will just look a little bit weird to the eye.

(screenshot: nginx proxy environment variables)

That’s all I’ve got for now.  This minimal templated Nginx configuration is handy for testing out simple web servers, especially proxies, and is also easy to port around using Docker.


Copy text to clipboard using Powershell

In day-to-day life as an administrator, it is very common to copy and paste text between files, editors, etc.  There are a few features built into Windows and Powershell that can help make the copy pasta process easier.

The “clip” utility is a tool that has been built into Windows for a while now (I think since Windows 7), which allows you to redirect command output to the Windows system clipboard.  Pair the clip tool with Powershell piping and you have a very nice way of getting text out of the shell and into a text editor, or wherever else you may want it.

The following commands are simple and quick examples of how to pipe output text of a Powershell command to the clipboard.  You can use this method to output text from a file or from another Powershell command.

cat <file> | clip
Get-NetAdapter | clip

Then just hit CTRL+V in notepad (or your favorite text tool) and you will have the output of the previously clipped command.  In the above example, cat is just an alias for Get-Content in Powershell, carried over from the Linux and Unix shells.  Use the Get-Alias command to find all of the various shortcuts for Powershell commands.
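Newer versions of Powershell (5.0 and above) also ship with native Set-Clipboard and Get-Clipboard cmdlets, which remove the dependency on clip.exe and let you read clipboard contents back into the shell.

Get-Content <file> | Set-Clipboard
Get-Clipboard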

I tend to grab text out of files quite often, so writing text to the clipboard and pasting it elsewhere with a few keystrokes has saved me countless hours of clicking around in GUIs over the years.  Shortcuts like this are often missed or overlooked by admins that are used to pointing and clicking, so make sure you try it out if you haven’t discovered this trick.


Powershell for Linux!

Microsoft has been making a lot of inroads in the Open Source and Linux communities lately.  Linux and Unix purists undoubtedly have been skeptical of this recent shift.  Canonical, for example, has caught flak for partnering with Microsoft recently.  But the times are changing, so instead of resenting this progress, I chose to embrace it.  I’ll even admit that I actually like many of the Open Source contributions Microsoft has been making – including a flourishing Github account, as well as an increasingly rich and cross platform set of tools that includes Visual Studio Code, Ubuntu/Bash for Windows, .NET Core and many others.

If you want to take the latest and greatest in Powershell v6 for a spin on a Linux system, I recommend using a Docker container if available.  Otherwise just spin up an Ubuntu (14.04+) VM and you should be ready.  I do not recommend trying out Powershell for any type of workload outside of experimentation, as it is still in alpha for Linux.  The beta v6 release (with Linux support) is around the corner but there is still a lot of ground that needs to be covered to get there.  Since Powershell is Open Source you can follow the progress on Github!

If you use the Docker method, just pull and run the container:

docker run -it --rm ubuntu:16.04 bash

Then add the Microsoft Ubuntu repo:

# apt-transport-https is needed for connecting to the MS repo
apt-get update && apt-get install -y curl apt-transport-https
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | tee /etc/apt/sources.list.d/microsoft.list

Update and install Powershell (drop the sudo if you are running as root inside the container):

sudo apt-get update
sudo apt-get install -y powershell

Finally, start up Powershell:

powershell

If it worked, you should see a message for Powershell and a new command prompt:

# powershell
PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS />

Congratulations, you now have Powershell running in Linux.  To take it for a spin, try a few commands out.

Write-Host "Hello World!"

This should print out a hello world message.  The Linux release is still in alpha, so there will surely be some discrepancies between Linux and Windows based systems, but the majority of cmdlets should work the same way.  For example, I noticed in my testing that the terminal was very flaky.  Reverse search (ctrl+r) and the Get-History cmdlet worked well, but arrow key scrolling through history did not.

You can even run Powershell on OSX now if you choose to.  I haven’t tried it yet, but it is an option for those that are curious.  Needless to say, I am looking forward to the beta release.
