Microsoft has been making a lot of inroads in the Open Source and Linux communities lately. Linux and Unix purists undoubtedly have been skeptical of this recent shift. Canonical, for example, has caught flak for partnering with Microsoft recently. But the times are changing, so instead of resenting this progress, I chose to embrace it. I’ll even admit that I actually like many of the Open Source contributions Microsoft has been making – including a flourishing Github account, as well as an increasingly rich, cross platform set of tools that includes Visual Studio Code, Ubuntu/Bash for Windows, .NET Core and many others.
If you want to take the latest and greatest in Powershell v6 for a spin on a Linux system, I recommend using a Docker container if one is available. Otherwise, just spin up an Ubuntu (14.04+) VM and you should be ready to go. I do not recommend using Powershell for any type of workload beyond experimentation, as it is still in alpha for Linux. The beta v6 release (with Linux support) is around the corner, but there is still a lot of ground to cover to get there. Since Powershell is Open Source, you can follow the progress on Github!
If you use the Docker method, just pull and run the container:
docker run -it --rm ubuntu:16.04 bash
Then install a couple of prerequisites so you can add the Microsoft Ubuntu repo:
# apt-transport-https is needed for connecting to the MS repo
apt-get update && apt-get install curl apt-transport-https
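Next, register the repo and install the powershell package. These steps are adapted from Microsoft’s published repo instructions for Ubuntu 16.04, so treat them as a sketch – the exact URLs and package names may have shifted since the alpha days:
# Import the Microsoft repository key and add the repo
# (a minimal container may also need the gnupg package for apt-key)
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/microsoft.list
# Install Powershell from the new repo
apt-get update && apt-get install -y powershell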
If it worked, you should see a message for Powershell and a new command prompt:
# powershell
PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS />
Congratulations, you now have Powershell running on Linux. To take it for a spin, try a few commands out.
Write-Host "Hello Wordl!"
This should print out a hello world message. The Linux release is still in alpha, so there will surely be some discrepancies between Linux and Windows based systems, but the majority of cmdlets should work the same way. That said, I noticed in my testing that the terminal experience was still flaky – reverse search (ctrl+r) and the Get-History cmdlet worked well, but scrolling through history with the arrow keys did not.
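A few other everyday cmdlets are worth poking at to get a feel for things:
Get-Process | Select-Object -First 5
Get-ChildItem /tmp
Get-Date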
You can even run Powershell on OSX now if you choose to. I haven’t tried it yet, but it is an option for those that are curious. Needless to say, I am looking forward to seeing where this goes.
We have all heard about DNS catastrophes. I just read about a horror story on reddit the other day, where an Azure root DNS zone was accidentally deleted with no backup. I experienced a similar disaster a few years ago – a simple DNS change managed to knock out internal DNS for an entire domain, which contained hundreds of records. Reading the post hit close to home and stirred up some of my own past anxiety, so I began poking around for solutions. Immediately, I noticed that backing up DNS records is usually skipped over as part of the backup process. Folks just tend to never do it, for whatever reason.
I did discover, though, that backing up DNS is easy. So I decided to fix the problem.
I wrote a simple shell script that dumps all Route53 zones for a given AWS account to json files, and uploads them to an S3 bucket. The script is only a handful of lines, which is perfect because it doesn’t take much effort to potentially save your bacon.
If you don’t host DNS in AWS, the script can be modified to work with other DNS providers (assuming they have public APIs).
Here’s the script:
#!/usr/bin/env bash
set -e
# Dump route 53 zones to a text file and upload to S3.
BACKUP_DIR=/home/<user>/dns-backup
BACKUP_BUCKET=<bucket>
# Use full paths for cron
CLIPATH="/usr/local/bin"
# Dump all zones to a file and upload to s3
function backup_all_zones () {
    local zones
    # Enumerate all zones
    zones=$($CLIPATH/aws route53 list-hosted-zones | jq -r '.HostedZones[].Id' | sed "s/\/hostedzone\///")
    for zone in $zones; do
        echo "Backing up zone $zone"
        $CLIPATH/aws route53 list-resource-record-sets --hosted-zone-id $zone > $BACKUP_DIR/$zone.json
    done
    # Upload backups to s3
    $CLIPATH/aws s3 cp $BACKUP_DIR s3://$BACKUP_BUCKET --recursive --sse
}
# Create backup directory if it doesn't exist
mkdir -p $BACKUP_DIR
# Backup up all the things
time backup_all_zones
Be sure to update the <user> and <bucket> in the script to match your own environment settings. Dumping the DNS records to json is nice because it allows for a more programmatic way of working with the data. This script can be run manually, but is much more useful if run automatically. Just add the script to a cronjob and schedule it to dump DNS periodically.
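As an illustration, a crontab entry along these lines would dump all zones nightly at 2am (the script name and log path are just placeholders for wherever you saved things):
# m h dom mon dow command
0 2 * * * /home/<user>/dns-backup/route53-backup.sh >> /var/log/route53-backup.log 2>&1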
For this script to work, the aws cli and jq need to be installed. The installation steps are skipped in this post, but they are trivial – refer to the aws cli and jq documentation for instructions.
The aws cli needs to be configured with an API key that has read access to Route53 and the ability to write to S3. Details are skipped for this step as well – be sure to consult the AWS documentation on setting up IAM permissions for help with creating API keys. Another, simpler approach is to use a pre-existing key with admin credentials (not recommended).
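If you do want to scope the key down properly, a minimal IAM policy for this script would look something like the following (the bucket name is a placeholder, and you may want to lock the Route53 resource down further):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}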
One thing I quickly discovered as I get acclimated to my new Windows machine is that by default the Windows Powershell CLI appends the executable file extension to the command when you tab complete it, which is not the case on Linux or OSX. That got me wondering if it is possible to modify this default behavior and remove the extension. I’m going to ruin the surprise and let everybody know that it is definitely possible to change this behavior, thanks to the flexibility of Powershell and friends. Now that the surprise is ruined, read on to find out how this solution works.
To check which file types Windows considers to be executable you can type $Env:PathExt.
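On a stock Windows install the list looks something like this (yours may differ if other tools have registered their own extensions):
PS C:\> $Env:PathExt
.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC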
The problem, though, is that when you start typing a command whose extension is in this list, say python, and tab complete it, Windows will automatically append the file extension to the completed command. Since I am more comfortable using a *nix style shell, having to deal with the file extensions is an annoyance.
Below I will show you a hack for hiding these extensions from your Powershell prompt. It actually takes much more work than I expected, but with some help from folks over at stackoverflow, we can get it done. Basically, we need to override the default Powershell tab completion functionality with our own, and then have that override loaded whenever a Powershell prompt starts, via a custom Profile.ps1 file.
To get this working, the first step is to look at what the default tab completion does.
(Get-Command 'TabExpansion2').ScriptBlock
This will spit out the code that handles the default tab completion behavior. To get our custom behavior, we need to override that code with our own logic (I wish I could say I came up with this myself, but alas). The full script is posted below.
The code looks a little bit intimidating, but it is basically just checking whether the completed command is an executable on our system path, and if it is, stripping out the extension.
So to get this all working, we need to create a file with the logic, and have Powershell read it at load time. Go ahead and paste the following code into a file like no_ext_tabs.ps1. I place this in the Powershell path (~/Documents/WindowsPowerShell), but you can put it anywhere.
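The script below is a sketch of that approach rather than a verbatim copy of the stackoverflow answer: it re-declares TabExpansion2, asks the built-in completion engine for results, and then strips any extension listed in $Env:PATHEXT off of command completions. Treat it as an approximation – your mileage may vary.
# no_ext_tabs.ps1 - strip executable extensions from tab completion results
function global:TabExpansion2 {
    [CmdletBinding(DefaultParameterSetName = 'ScriptInputSet')]
    param(
        [Parameter(ParameterSetName = 'ScriptInputSet', Mandatory = $true, Position = 0)]
        [string] $inputScript,
        [Parameter(ParameterSetName = 'ScriptInputSet', Mandatory = $true, Position = 1)]
        [int] $cursorColumn,
        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 0)]
        [System.Management.Automation.Language.Ast] $ast,
        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 1)]
        [System.Management.Automation.Language.Token[]] $tokens,
        [Parameter(ParameterSetName = 'AstInputSet', Mandatory = $true, Position = 2)]
        [System.Management.Automation.Language.IScriptPosition] $positionOfCursor,
        [Parameter(ParameterSetName = 'ScriptInputSet', Position = 2)]
        [Parameter(ParameterSetName = 'AstInputSet', Position = 3)]
        [Hashtable] $options = $null
    )

    # Ask the built-in completion engine for its results first
    if ($PSCmdlet.ParameterSetName -eq 'ScriptInputSet') {
        $result = [System.Management.Automation.CommandCompletion]::CompleteInput($inputScript, $cursorColumn, $options)
    }
    else {
        $result = [System.Management.Automation.CommandCompletion]::CompleteInput($ast, $tokens, $positionOfCursor, $options)
    }

    # For command completions, strip any extension that Windows considers executable
    $pathExts = $Env:PATHEXT -split ';' | Where-Object { $_ }
    for ($i = 0; $i -lt $result.CompletionMatches.Count; $i++) {
        $match = $result.CompletionMatches[$i]
        if ($match.ResultType -ne 'Command') { continue }
        foreach ($ext in $pathExts) {
            if ($match.CompletionText.EndsWith($ext, [System.StringComparison]::OrdinalIgnoreCase)) {
                $newText = $match.CompletionText.Substring(0, $match.CompletionText.Length - $ext.Length)
                $result.CompletionMatches[$i] = New-Object -TypeName System.Management.Automation.CompletionResult -ArgumentList $newText, $match.ListItemText, $match.ResultType, $match.ToolTip
                break
            }
        }
    }

    return $result
}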
To start using the tab completion override right away, just dot source the file as shown below.
. .\no_ext_tabs.ps1
If you want the extensions to be hidden every time you start a new Powershell session, we just need to create a new Powershell profile (the Microsoft docs have more reading on creating Powershell profiles if you’re interested) and have it load our script. If you already have a custom profile you can skip this step.
New-Item -path $profile -type file -force
After you create the profile go ahead and edit it by adding the following configuration.
# Dot source no_ext_tabs to remove file extensions from executables in path
. C:\Users\jmreicha\Documents\WindowsPowerShell\no_ext_tabs.ps1
Close your shell and open it again and you should no longer see the file extensions.
There is one last little, unrelated tidbit that I discovered through this process but thought was pretty handy and worth sharing with other Powershell N00bs.
Powershell 3 and above provides some nice key bindings for jumping around the CLI, similar to a bash based shell, if you have a background using *nix systems.
You can check the full list of these key bindings by typing ctrl+alt+shift+? in your Powershell prompt (thanks Keith Hill for this trick).
If you have worked with Jenkins for any extended length of time, you quickly realize that the server configuration can become complicated. If the server ever breaks and you don’t have a good backup of all the configuration files, it can be extremely painful to recreate all of the jobs that you have configured. And if you have started using the Jenkins workflow libraries, all of your custom scripts and code will disappear if you don’t back them up.
Luckily, backing up your Jenkins job configurations is a fairly simple and straightforward process. Today I will cover one quick and dirty way to back up configs using a Jenkins job.
There are some Jenkins plugins that will back up your configurations, but I found that it was just as easy to write a little bit of bash to do the backup, especially since I wanted to back up to S3, which none of the plugins I looked at handle. In general, the plugins I looked at either felt a little too heavy for what I was trying to accomplish or didn’t offer the functionality I was looking for.
If you are still interested in using a plugin, there are a few backup plugins on the Jenkins plugin site worth checking out. Keep reading if none of them look like a good fit.
The first step is to install the needed dependencies on your Jenkins server. For the backup method that I will be covering, the only tools that need to be installed are the aws cli, tar and rsync. Tar and rsync should already be installed; to get the aws cli, you can download and install it with pip on the Jenkins server that has the configurations you want to back up.
pip install awscli
After the prerequisites have been installed, you will need to create your Jenkins job. Click New Item -> Freestyle and input a name for the new job.
Then you will need to configure the job.
The first step will be figuring out how often you want to run this backup. A simple strategy would be to back up once a day. The once per day schedule is illustrated below.
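Under “Build Triggers”, check “Build periodically” and give the job a cron style schedule along these lines:
H 0 * * *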
Note that the ‘H’ above tells Jenkins to randomize the minute within the hour that the job runs, so that if other jobs are configured with the same schedule the load gets spaced out.
The next step is to backup the Jenkins files. The logic is all written in bash so if you are familiar it should be easy to follow along.
# Delete all files in the workspace
rm -rf *
# Create a directory for the job definitions
mkdir -p $BUILD_ID/jobs
# Copy global configuration files into the workspace
cp $JENKINS_HOME/*.xml $BUILD_ID/
# Copy keys and secrets into the workspace
cp $JENKINS_HOME/identity.key.enc $BUILD_ID/
cp $JENKINS_HOME/secret.key $BUILD_ID/
cp $JENKINS_HOME/secret.key.not-so-secret $BUILD_ID/
cp -r $JENKINS_HOME/secrets $BUILD_ID/
# Copy user configuration files into the workspace
cp -r $JENKINS_HOME/users $BUILD_ID/
# Copy custom Pipeline workflow libraries
cp -r $JENKINS_HOME/workflow-libs $BUILD_ID
# Copy job definitions into the workspace
rsync -am --include='config.xml' --include='*/' --prune-empty-dirs --exclude='*' $JENKINS_HOME/jobs/ $BUILD_ID/jobs/
# Create an archive from all copied files (since the S3 plugin cannot copy folders recursively)
tar czf jenkins-configuration.tar.gz $BUILD_ID/
# Remove the directory so only the tar.gz gets copied to S3
rm -rf $BUILD_ID
Note that I am not backing up the job history because the history isn’t important for my uses. If the history IS important, make sure to add a line to backup those locations. Likewise, feel free to modify and/or update anything else in the script if it suits your needs any better.
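For example, adding one more rsync to the script before the tar step, along these lines (an untested sketch – build history can get large), would pull the per-job build directories into the backup as well:
# Also copy per-job build history (optional, can be large)
rsync -am --include='*/' --include='builds/***' --exclude='*' $JENKINS_HOME/jobs/ $BUILD_ID/jobs/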
The last step is to copy the backup to another location. This is why we installed aws cli earlier. So here I am just uploading the tar file to an S3 bucket, which is versioned (look up how to configure bucket versioning if you’re not familiar).
export AWS_DEFAULT_REGION="xxx"
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="xxx"
# Upload archive to S3
echo "Uploading archive to S3"
aws s3 cp jenkins-configuration.tar.gz s3://<bucket>/jenkins-backup/
# Remove tar.gz after it gets uploaded to S3
rm -rf jenkins-configuration.tar.gz
Replace AWS_DEFAULT_REGION with the region where the bucket lives (typically us-east-1), and make sure to update AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to use an account with access to write to the S3 bucket (not covered here). Finally, <bucket> should be replaced with your own bucket name.
The backup process itself is usually pretty fast unless the Jenkins server has a massive amount of jobs and configurations. Once you have configured the job, feel free to run it once to test if it works. If the job worked and returns as completed, go check your S3 bucket and make sure the tar.gz file was uploaded. If you are using versioning there should just be one file, and if you choose the “show versions” option you will also see the previous versions of the archive.
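If versioning isn’t already enabled on the bucket, one quick way to turn it on is with the aws cli (assuming your credentials also allow s3:PutBucketVersioning):
aws s3api put-bucket-versioning --bucket <bucket> --versioning-configuration Status=Enabled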
If everything went okay with your backup and upload to S3, you are done. Common issues when configuring this backup method are choosing the correct AWS bucket, region and credentials. Also, double check where all of your Jenkins configurations live in case they aren’t in a standard location.
There has been a lot of work lately on bringing Docker containers to the Windows platform. Docker has been working closely with Microsoft on the effort and just announced the availability of Docker on Windows at the latest Ignite conference. So, in this post we will go from 0 to your first Windows container.
This post covers some details about how to get up and running via the Docker app and also manually with some basic Powershell commands. If you just want things to work as quickly as possible I would suggest the Docker app method, otherwise if you are interested in learning what is happening behind the scenes, you should try the Powershell method.
The prerequisites are basically the Windows 10 Anniversary update and its required components: the Docker app if you want to configure things through its GUI, or the Windows container feature and Hyper-V if you want to configure your environment manually.
Configure via Docker app
This is by far the easier of the two methods. This recent blog post has very good instructions and installation steps which I will step through in this post, adding a few pieces of info that helped me out when going through the installation and configuration process.
After you install the Win 10 Anniversary update, go grab the latest beta version of the Docker Engine, via the Docker for Windows project. NOTE: THIS METHOD WILL NOT WORK IF YOU DON’T USE BETA 26 OR LATER. To check your version, click on the Docker tray icon, choose “About Docker”, and make sure it says -beta26 or higher.
After you go through the installation process, you should be able to run Docker containers. You should also now have access to other Docker tools, including docker-compose and docker-machine. To test that things are working, run the following command.
docker run hello-world
If the run command worked, you are most of the way there. By default, the Docker engine will be configured to use a Linux based VM to drive its containers. If you run “docker version” you can see that your Docker server (daemon) is using Linux.
In order to get things working via Windows, select the option “Switch to Windows containers” in the Docker tray icon.
Now run “docker version” again and check what Server architecture is being used.
As you can see, your system should now be configured to use Windows containers. Now you can try pulling a Windows based container.
docker pull microsoft/nanoserver
If the pull worked, you are all set. There’s a lot going on behind the scenes that the Docker app abstracts away, but if you want to try enabling Windows container support yourself manually, see the instructions below.
Configure with Powershell
If you want to try out Windows native containers without the latest Docker beta check out this guide. The basic steps are to:
Enable the Windows container feature
Enable the Hyper-V feature
Install Docker client and server
To enable the Windows container and Hyper-V features from the CLI, run the following commands from an elevated (admin) Powershell prompt.
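Assuming the standard Windows optional feature names (“Containers” for the container feature and “Microsoft-Hyper-V” for Hyper-V), the commands look something like this:
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All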
After you enable Hyper-V you will need to reboot your machine. From the command line the command is “Restart-Computer -Force”.
After the reboot, you will need to either install the Docker engine manually or just use the Docker app. Since I have already demonstrated the Docker app method above, here we will just install the Docker engine. It’s also worth mentioning that if you are using the Docker app method, or have used it previously, these commands have already been run, so the features should already be turned on, which simplifies the process.
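A rough sketch of the manual install, based on the approach in Microsoft’s Windows container documentation, is to download the Docker engine zip, unpack it, register dockerd as a service and start it. The download URL is left as a placeholder here since the exact build location changes:
# Download and unpack the Docker engine (replace <docker-zip-url> with the current engine build)
Invoke-WebRequest "<docker-zip-url>" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles
# Add Docker to the path for this session
$env:Path += ";$env:ProgramFiles\docker"
# Register the Docker daemon as a Windows service and start it
& "$env:ProgramFiles\docker\dockerd.exe" --register-service
Start-Service docker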
Then you can try pulling your docker image, as above.
docker pull microsoft/nanoserver
There are some drawbacks to this method, especially in a dev based environment.
The Powershell method involves a lot of manual effort, especially on a local machine where you just want to test things out quickly. Obviously the install/config process could be scripted out, but that solution isn’t ideal for most users. Another drawback is that you have to manually manage which version of Docker is installed; this method does not update the version automatically. Using the managed app also installs and manages versions of the other Docker productivity tools, like compose and machine, that make interacting with and managing containers a lot easier.
I can see the Powershell installation method being leveraged in a configuration management scenario or where a specific version of Docker should be deployed on a server. Servers typically don’t need the other tools and should be pinned at specific version numbers to avoid instability issues and to make sure there aren’t other programs that could potentially cause issues.
While the Docker app is still in beta and its Windows container management component is new, I would definitely recommend it as a solution. I haven’t had any issues with it yet, outside of a few edge cases, and it just makes the Docker experience so much smoother, especially for devs and other folks who are new to Docker and don’t want to muck around with the system.