Setting up Git in PowerShell

It seems like everybody is using git these days.  And most people aren't stuck using Windows in their day to day workflow.  Unfortunately, I am.  That makes it much more painful to get up and running with a lot of the best open source projects shared on GitHub and other repositories hosted via git.  However, there is hope, and it is possible for Windows users to join the git party.  In this post I would like to describe just how to do that, and it should only take a few minutes if done correctly.  I will mention beforehand that there are a few steps that need to be completed for this technique to work, steps that are typically taken care of for you in a Linux or OS X environment.

The goal of this post is to work through these steps and get users up and running as quickly and easily as possible, reducing the amount of confusion and fumbling around with settings.  It is designed for beginners who are just getting their feet wet with git, but hopefully others who are coming from a different environment and are confused by the Windows way of doing things can use it as a resource as well.

First step – Download and install the git port for Windows.

This is pretty straightforward.  Download and run the executable to install git for Windows.  If you just want to get up and running, or are lazy, you can leave all of the defaults as you run through the installation wizard.

Second step – Add the git binaries to your system path variable.

This is the most important step because, out of the box, git won't work in your ordinary PowerShell prompt; it has to be run from its own separate shell.  To fix this and make the necessary binaries available everywhere, open up your environment variables (shown here on Windows 8):

Computer -> Properties -> Advanced system settings -> Environment Variables

[Screenshot: the Environment Variables dialog]

and add the following value to the PATH variable.

C:\Program Files\Git\bin

Here is what this should look like in Windows.

[Screenshot: the PATH variable with the Git bin directory added]
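
If you would rather make this change from PowerShell itself, the same edit can be scripted from an elevated prompt.  This is just an alternative sketch of the step above (open a new shell afterwards so the updated PATH is picked up):

# Append the Git bin directory to the machine-level PATH
$gitBin = "C:\Program Files\Git\bin"
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$machinePath;$gitBin", "Machine")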

Third step (optional) – Download and install posh-git for better PowerShell and git integration.

I have highlighted part of this process before in an older post but will go through the steps again because it is pretty straightforward.  To install posh-git you first need a PowerShell package management tool called PsGet (instructions here).  To get this tool, run the following command from your PowerShell prompt.

(new-object Net.WebClient).DownloadString("http://psget.net/GetPsGet.ps1") | iex

Once the command has completed you should be able to simply run this install command and be finished.

install-module posh-git
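
If your prompt does not pick up the git status information right away, you can load the module by hand and drop the same line into your PowerShell profile so it loads in every new session (this assumes PsGet placed posh-git somewhere on your module path):

Import-Module posh-git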

That should be it.  With these simple steps you should be able to use git from the command line just like you are accustomed to on other operating systems.  As I said, there is a tad more legwork on Windows, but you can really take advantage of the flexibility of PowerShell to get things working.  I hope this helps, and as always, let me know if you have any tips or questions.

About the Author: Josh Reichardt

Josh is the creator of this blog, a system administrator and a contributor to other technology communities such as /r/sysadmin and Ops School. You can also find him on Twitter and Facebook.

Quickly Find Exchange Database Usage

Here is a PowerShell script you can use to quickly determine the total amount of space taken up by all of your Exchange database files (EDB files) on an Exchange server.  I'd like to note that this may not be a 100% accurate representation, but it is a great way to get a ballpark number without having to add the numbers up yourself manually.

# Grab all mailbox databases along with their size information
$dbs = Get-MailboxDatabase -Status

# Add up the size of each database, in bytes
$totalsize = 0
foreach ($db in $dbs) {
    $edbsize = $db.DatabaseSize.ToBytes()
    $totalsize += $edbsize
}

Write-Host $totalsize

I noticed the other day that I had no way to calculate the total amount of space being used by my Exchange databases.  Even after scouring the Googles I was unable to find what I was looking for quickly, so I wrote this script up to fix that problem.  Just copy the previous bit of code into a .ps1 file with Notepad and execute the script from the Exchange Management Shell (EMS).  It is a super simple way to iterate through all the databases, add their sizes to a variable and then spit that variable out when it is complete.
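
The script above prints the total in raw bytes.  If you would rather see something more readable, a small tweak to the last line (just a formatting change, nothing Exchange specific) converts the total to gigabytes:

Write-Host ("Total EDB size: {0:N2} GB" -f ($totalsize / 1GB))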


Disable Offline Files in Windows 7

Offline Files in Windows is a set of features that essentially gives users the ability to keep working with network files while disconnected from the network.  For example, if a user has a laptop with a mapped drive or network share and takes the computer outside of the network, Offline Files allows them to continue working with those files.  I will not cover the details of how all of this magic works in this post; I just want to show people the best way I found to disable this feature with the least amount of problems.  If you want to go straight to the source, here is the original article that gave me about 95% of the information necessary for accomplishing this task.

The remainder of this post will detail my findings and experience from the link above.  Offline Files is enabled by default in Windows 7, and here is a good overview of its benefits.  For me personally as an admin, however, the feature has so far caused a lot of confusion for users who are not accustomed to it as we move towards Windows 7.

These settings can of course be controlled on a per-user basis through the "Sync Center" tool in Windows.  But in a larger environment where this process needs to be automated for many users, Group Policy becomes the most effective way to handle the problem.  There are a few steps to get offline files disabled correctly, so I thought I would share all the pieces in case somebody runs across a similar need.  The first step is to adjust the following settings in Group Policy:

Computer Configuration -> Policies -> Administrative Templates -> Network -> Offline Files

  • Allow or Disallow use of the Offline Files feature: Disabled
  • Prohibit user configuration of Offline Files: Enabled
  • Sync all offline files when logging on: Disabled
  • Sync all offline files before logging off: Disabled
  • Sync offline files before suspend: Disabled
  • Remove “Make available offline” command: Enabled
  • Prevent use of Offline Files folder: Enabled

Next, we need to tell Group Policy to shut off the Offline Files service and disable it on all Windows machines that have the service installed (Windows XP, 7 and 8 machines).  To do this you will need to edit your Group Policy settings through RSAT from a machine that already has the service installed.  This is an important step: you will not be able to find this service if you are adjusting the GP settings from a server.  The service is located in the following location:

Computer Configuration -> Windows Settings -> Security Settings -> System Services

The specific service we are looking for is "CscService", which corresponds to the service labeled "Offline Files" in the Windows services list.  Set its startup type to Disabled.
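
For reference, here is a quick way to check for and shut off the service on a single machine from an elevated PowerShell prompt.  This is only a local sketch of what the Group Policy setting does for you at scale, and a reboot may still be needed for the change to fully take effect:

# Confirm the Offline Files service exists on this machine
Get-Service -Name CscService

# Stop it and prevent it from starting at boot
Stop-Service -Name CscService -Force
Set-Service -Name CscService -StartupType Disabled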

The last step to get this policy working correctly is to add a registry value that will fix machines that have already cached certain network resources.  Essentially, adding this registry value tells the machine to blow away its database of offline files and remove the cached files as well.  To configure this setting we need to add a custom registry entry:

Computer Configuration -> Preferences -> Windows Settings -> Registry

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CSC\Parameters
Value name: FormatDatabase
Value type: DWORD
Value data: 1
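
If you ever need to apply the same change by hand on a single machine rather than through the Group Policy preference, something like this should work from an elevated PowerShell prompt (the key path and value are the same ones shown above):

$key = "HKLM:\SYSTEM\CurrentControlSet\services\CSC\Parameters"
New-ItemProperty -Path $key -Name FormatDatabase -PropertyType DWord -Value 1 -Force
# The offline files database is reformatted on the next reboot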

Here is a good article with instructions on how to change the registry settings by hand, and below is a screenshot from my own GP environment showing how the settings should look in the GP Management Console.

[Screenshot: the Offline Files registry entry in the GP Management Console]

That should be all of the changes that need to be made.  If I missed anything, let me know; hopefully this will save people time in the future.


The Cisco Live! 2013 Experience


I recently returned from my trip to Orlando for the Cisco Live! 2013 conference, so I thought I would take some time to reflect on the experience and report on some key highlights I took away from this year's event.  This was my first Cisco Live! and I have to say I was really impressed with the experience as a whole.  There were a few gripes here and there, but overall the event was pretty awesome.  In this post I want to discuss some of the details further, beginning with some of the Cisco-specific trends that I noticed.  Of course this is all subjective and some may disagree, but here is what I felt to be generally true throughout much of the event.

Technological takeaways (in my personal experience)

  • Cisco is placing a huge bet that SDN is going to take off in the immediate future.
  • Mobility is going to continue to explode and increase the diversity of networks so we need to prepare and build our network infrastructures to handle the drive towards mobility.
  • Cisco really drove home the concept of future device connectedness by pushing their idea of "the Internet of Everything", the concept that technological experiences will converge and become tightly coupled.  One example presented was a seamless, connected experience at a hotel.  The real takeaway is that as technology continues to evolve, we will need to adapt our networks to suit these needs.

In general, I felt these were the main drivers behind a lot of what will be happening in the future of networking, at Cisco and beyond.  Obviously Cisco was there to push their products, so I will go ahead and cover a few of the key ideas and products that Cisco believes will help drive these future changes:

  • The maturation of the ISE platform.  This is the convergence of a number of disparate technologies Cisco currently offers into a unified identity and access platform, which lines up with the growth of mobile and the BYOD movement.
  • The SDN components.  Essentially this is the Cisco ONE line of products focused on the evolving SDN space, which includes the onePK toolkit for development and the ONE Controller for OpenFlow traffic control.  There were more SDN components, I just can't think of them all right now.
  • New product introductions and evolutions.  The Nexus 7710 and 7718 for scaling out the data center, and the Catalyst 6800 series to augment the capabilities of the 6500 series, improving performance, scale and speed.

There were many more announcements and products covered, but to me these were the main focus.  If I missed anything you thought was important, let me know.  Now that the big announcements are covered, I'd like to go over some of the other key highlights from the event.

The Good

  • Organization.  Everything from hotel shuttles and information kiosks to a very helpful event staff.  I must say the event planners and organizers really thought things through (for the most part).
  • Deep dive sessions.  The presenters were often the people who helped create the RFCs or who were responsible for writing the code.  You can't get much closer to the source than that.  The few presenters I spoke with were all super nice people as well.
  • Free certification tests.  These ranged from CCNA all the way up to CCIE.
  • Universal Studios.  Free food, amazing rides, it was just a great all around experience.  Plus free booze, so you know, that was pretty awesome.
  • Journey.  Do I even need to say more?
  • World of Solutions.  This was the product and demo floor.  Other than the fact that I sold my soul, I learned a lot here and was introduced to a ton of new products I otherwise would not have known about, plus I got about 20 t-shirts.  Also free booze here as well.
  • Keynote speech by Sir Richard Branson.  It took the format of a question-and-answer interview, and it was really cool to hear Branson talk and answer questions so candidly.  No free booze, but I gained some respect for him.

The Bad

  • The mobile apps.  It almost seemed like these were an afterthought because much of the functionality either didn't work at all or was crippled.  It was a good idea but the execution was lacking; I'm sure this will get fixed next year.
  • The website was down the first day, due to a load balancer that broke.  This caused a lot of confusion and problems, but I was able to print my schedule out at a kiosk so it wasn’t a huge issue for me.
  • The shuttle to and from Universal was a disaster for me.  Many others didn’t experience this issue but it took about 45 minutes to get to the theme park and at least an hour to get back to my hotel.  I can’t really complain looking back but it was frustrating at the time.

I would definitely recommend that anybody responsible for supporting a network in any capacity attend this event at least once.  One nice thing about it is that it doesn't matter what skill level you are at; all ranges were covered and represented.  I am lucky that I was able to attend this year and am very thankful.  This was a great experience, it was incredibly eye opening, and the positive effect it had on my own thought process can't be overstated.  I think it will benefit me throughout my career and hopefully can be used to create opportunities for myself in the future.


Creating a new instance-store AMI for Amazon AWS EC2

This is a HOWTO on building your own instance-store backed AMI image suitable for creating a Paid AMI. The motivation for doing this HOWTO is simple: I tried it, and it has a lot of little gotchas, so I want some notes for myself. This HOWTO assumes you're familiar with launching EC2 instances, logging into them, and doing basic command line tasks.

Choosing a starting AMI

There's a whole ton of AMIs available for use with EC2, but not quite so many which are backed by instance-store storage. Why's that? Well, EBS is a lot more flexible and scalable. The instance-store images have a fairly limited size for their root partition. For my use case, this isn't particularly important, and for many use cases, it's trivial to mount some EBS volumes for persistent storage.

Amazon provides some of their own Amazon Linux AMIs backed by either EBS or instance-store, but they're based on CentOS, and frankly, I've had so much trouble with CentOS in the past that I just prefer my old standby: Ubuntu. Unfortunately, I had a lot of trouble finding a vanilla Ubuntu 12.04 LTS instance-store backed image through the AWS Console. They do exist, however, and they're provided by Canonical. Thanks, guys!

Here’s a list of all the 12.04 Precise official AMIs:
http://cloud-images.ubuntu.com/releases/precise/release/

Conveniently, there’s a Launch button right there for each AMI instance. Couldn’t be easier!

Installing the EC2 Tools

Once you've got an instance launched and you're logged in and sudo'd to root, you'll need to install the EC2 API and AMI tools provided by Amazon. The first step is, of course, to download them. Beware! The tools available through the Ubuntu multiverse repositories are unfortunately out of date.

The latest EC2 API tools can be found here:
http://aws.amazon.com/developertools/351

The latest EC2 AMI tools can be found here:
http://aws.amazon.com/developertools/368

I like to copy the download link and use wget to download them rather than scp'ing them from my client machine.

sudo su
mkdir -p /tmp/ec2-tools
cd /tmp/ec2-tools
wget -O ec2-api-tools.zip 'http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9'
wget -O ec2-ami-tools.zip 'http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip'

Before we can install the EC2 tools, we need to install a few packages that our vanilla Ubuntu is lacking, namely zip and Java.

apt-get install zip
apt-get install openjdk-6-jre-lib
apt-get install ruby

Once we have those installed, we need to unzip our packages and install them to the /usr/local directory.

unzip "*.zip"
find . \( -name bin -o -name lib -o -name etc \) | \
    xargs -I path cp -r path /usr/local

Lastly we have to set the EC2_HOME and the JAVA_HOME environment variables for the EC2 tools to work properly. I like to do this by editing /etc/bash.bashrc so anyone on the machine can use the tools without issue.

echo -e "\nexport EC2_HOME=/usr/local\nexport JAVA_HOME=/usr\n" >> /etc/bash.bashrc

Once we log out and back in, those variables will be set, and the EC2 tools will be working.

# exit
$ sudo su
# ec2-version
1.6.7.4 2013-02-01

Customizing Your AMI

At this point, your machine should be all set for you to do whatever customization you need to do. Install libraries, configure boot scripts, create users, get your applications set up, anything at all. Once you’ve got a nice, stable (rebootable) machine going, then you can image it.

Bundling, Uploading and Registering your AMI

This is actually pretty easy, but I’ll still go through it. The Amazon documentation is fairly clear, and I recommend following along with that as well, as it explains all the options to each command.

Here’s the official Amazon documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

  1. Create an S3 bucket. This is where you’ll upload your AMI images. If you already have a bucket, you can use that.
  2. Download your AWS security certificates and copy your API keys. They can be found here: https://portal.aws.amazon.com/gp/aws/securityCredentials
  3. Copy your credentials to the instance you’re going to image. First, create a directory to store them in on your instance:
    mkdir -p /tmp/cert
    chmod 777 /tmp/cert
  4. Then copy them from the place you downloaded them on your client machine, to your instance:
    scp -i <keypair_name> pk-*.pem cert-*.pem ubuntu@<host_name>:/tmp/cert
  5. Bundle your instance image. The actual image bundle and manifest will end up in /tmp.
    cd /tmp/cert
    ec2-bundle-vol -k <private_keyfile> -c <certificate_file> \
        -u <user_id> -e <cert_location>
    cd /tmp
  6. Upload your bundled image. Note that <your-s3-bucket> should include a path that is unique to this image, such as my-bucket/ami/ubuntu/my-ami-1, otherwise things will get very messy for you, because an image consists of an image.manifest.xml file and many chunks which compose the image itself, which are generically named by default when you use this tool.
    ec2-upload-bundle -b <your-s3-bucket> -m <manifest_path> \
        -a <access_key> -s <secret_key>
  7. Register your new AMI.
    ec2-register <your-s3-bucket>/<path>/image.manifest.xml -n <image_name> \
        -O <your_access_key> -W <your_secret_key>
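
To make steps 5 through 7 a little more concrete, here is what the whole sequence might look like with example values plugged in.  The key file names, account ID, bucket path and image name below are hypothetical placeholders, so substitute your own:

cd /tmp/cert
ec2-bundle-vol -k pk-EXAMPLE.pem -c cert-EXAMPLE.pem \
    -u 123456789012 -e /tmp/cert
cd /tmp
ec2-upload-bundle -b my-bucket/ami/ubuntu/my-ami-1 -m /tmp/image.manifest.xml \
    -a <access_key> -s <secret_key>
ec2-register my-bucket/ami/ubuntu/my-ami-1/image.manifest.xml -n my-ami-1 \
    -O <access_key> -W <secret_key>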

That’s it! You should be all set with a new AMI, which should also show up in the AWS Console.

About the Author: Jake Alheid

Jake is a Python evangelist and is a developer at about.me in San Francisco. He is also the creator of pyconfig and a code contributor on github.