Disable Offline Files in Windows 7

Offline Files in Windows is a set of features that essentially gives users the ability to keep working with network files while they are outside of the network.  So for example, if a user had a laptop with a mapped drive or network share and were to take their computer outside of the network, the features offered by Offline Files would allow this user to continue working with those files.  I will not cover the details of how all of this magic works in this post; I just want to show people the best way I found to disable this feature with the least amount of problems.  If you want to go straight to the source, here is the original article that gave me about 95% of the information necessary for accomplishing this task.

The remainder of this post will detail my findings and experience from the link above.  This feature (Offline Files) is enabled by default in Windows 7.  Here is a good overview of the benefits of Offline Files.  However, for me personally as an admin, this feature has so far caused a lot of confusion in the work environment for users that are not accustomed to having such a feature as we move towards Windows 7.

These settings can of course be controlled on a per-user basis through the “Sync Center” tool in Windows.  But when you are involved in a larger environment and need this sort of process automated for many users, Group Policy becomes the most effective way to handle the problem.  There are a few steps to get offline folders disabled correctly, so I thought I would share all the pieces in case somebody runs across a similar need.  The first step to disable the Offline Files features is to adjust the following settings in Group Policy:

Computer Configuration -> Policies -> Administrative Templates -> Network -> Offline Files

  • Allow or Disallow use of the Offline Files feature: Disabled
  • Prohibit user configuration of Offline Files: Enabled
  • Sync all offline files when logging on: Disabled
  • Sync all offline files before logging off: Disabled
  • Sync offline files before suspend: Disabled
  • Remove “Make available offline” command: Enabled
  • Prevent use of Offline Files folder: Enabled

Next, we need to tell Group Policy to shut off the Offline Files service and disable it on all Windows machines that have the service installed (Windows XP, 7 and 8 machines).  To do this you will need to modify your Group Policy settings, through RSAT, on a machine that already has the service installed.  This is an important step: you will not be able to find this service if you are adjusting the GP settings from a server.  This service is located in the following location:

Computer Configuration -> Windows Settings -> Security Settings -> System Services

The specific service we are looking for is “CscService”, which corresponds to the service labeled “Offline Files” in the Windows services list.
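
For a single machine, or just to verify what the policy ends up doing, the manual equivalent from an elevated command prompt looks something like this (the GPO simply pushes this same state out for you):

sc config CscService start= disabled
sc query CscService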

The last step to get this policy working correctly is to add a registry key that will fix machines that have already cached certain network resources.  Essentially, adding this registry key tells the machine to blow away its database of offline files and to remove the cached files as well.  To configure this setting we need to add in a custom registry entry:

Computer Configuration -> Preferences -> Windows Settings -> Registry

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CSC\Parameters
Value name: FormatDatabase
Value type: DWORD
Value data: 1
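
If you would rather apply the key by hand on a test machine instead of pushing it through a Group Policy preference, a quick sketch with reg.exe from an elevated command prompt would be:

reg add "HKLM\SYSTEM\CurrentControlSet\services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f

The cache actually gets cleared on the next restart, so reboot the machine after setting the value.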

Here is a good article with instructions on how to change the registry settings by hand, and below is a screenshot of my own GP environment showing how the settings should look in the GP Management Console.

Offline files registry entry

That should be all of the changes that need to be made.  If I missed anything let me know; hopefully this will save people some time in the future.


Cisco Live! 2013

The Cisco Live! 2013 Experience

I recently returned from my trip to Orlando for the Cisco Live! 2013 conference, so I thought I would take some time to reflect on the experience and report on some key highlights I was able to take away from this year’s event.  This was my first Cisco Live! and I have to say I was really impressed with the experience as a whole.  There were maybe a few gripes here and there, but overall the event in its entirety was pretty awesome.  So in this post I just want to discuss some of the details of the event further.  I don’t have a lot else to report on, so I will go ahead and get going, beginning with some of the Cisco specific trends that I noticed.  Of course this is all a subjective experience and some may disagree, but here is what I felt to be generally true throughout much of the event.

 Technological takeaways (in my personal experience).

  • Cisco is placing a huge bet that SDN is going to take off in the immediate future.
  • Mobility is going to continue to explode and increase the diversity of networks so we need to prepare and build our network infrastructures to handle the drive towards mobility.
  • Cisco really drove home the concept of the future connectedness of devices by pushing their idea of “the internet of everything”.  This is the concept that technological experiences will converge and be tightly coupled.  One example that was presented was a seamless experience at a hotel.  The real chunk of info to take away is that as technology continues to evolve we will need to adapt networks to suit these needs.

In general, I felt these were the main drivers and ideas for a lot of what will be happening in the future of networking, at Cisco and beyond.  Obviously Cisco was there to push their products, so I will go ahead and cover a few of the key ideas and products that Cisco believes will help drive these future changes.

  • The maturation of the ISE platform.  This will be the convergence of a number of disparate technologies Cisco currently offers into a unified identity and access platform, which will go hand in hand with the growth of mobile and the BYOD movement.
  • The SDN components.  Essentially this is the Cisco ONE line of products focused on the evolving SDN space, including the onePK developer toolkit and the ONE Controller for OpenFlow traffic control.  There were more SDN components, but I just can’t think of them right now.
  • New product introductions and evolutions.  The Nexus 7710 and 7718 for scaling out the data center, the 6800 series to augment the capabilities of the 6500 series, improving performance, scale and speed.

There were many more announcements and products covered but to me, these aforementioned products were the main focus and effort.  If I missed anything you thought was important let me know.  Now that I have the big announcements covered I’d like to cover some of the other key highlights from the event.

The Good

  • Organization.  Everything from hotel shuttles and information kiosks to a very helpful event staff.  I must say the event planners and organizers really thought things through (for the most part).
  • Deep dive sessions.  The presenters were often the people who helped create the RFCs or were responsible for writing the code.  You can’t get much closer to the source than that.  The few presenters I spoke with were all super nice people as well.
  • Free certification tests.  These ranged from CCNA all the way up to CCIE tests.
  • Universal Studios.  Free food, amazing rides, it was just a great all around experience.  Plus free booze, so you know, that was pretty awesome.
  • Journey.  Do I even need to say more?
  • World of Solutions.  This was their product and demo floor.  Other than the fact that I sold my soul, I learned a lot here and was introduced to a ton of new products I otherwise would not have known about, plus I got about 20 t-shirts.  Also free booze here as well.
  • Keynote speech by Sir Richard Branson.  It took the format of a question and answer style interview, and it was really cool to hear Branson talk and answer questions so candidly.  No free booze, but I gained some respect for him.

The Bad

  • The mobile apps.  It almost seemed like these were an afterthought, because much of the functionality either didn’t work at all or was crippled.  It was a good idea but the execution was lacking; I’m sure this will get fixed next year.
  • The website was down the first day due to a broken load balancer.  This caused a lot of confusion and problems, but I was able to print my schedule out at a kiosk so it wasn’t a huge issue for me.
  • The shuttle to and from Universal was a disaster for me.  Many others didn’t experience this issue but it took about 45 minutes to get to the theme park and at least an hour to get back to my hotel.  I can’t really complain looking back but it was frustrating at the time.

I would definitely recommend that anybody responsible for supporting a network in any capacity attend this event at least once.  One nice thing about this event is that it doesn’t matter what skill level you are at; all ranges were covered and represented.  I am lucky that I was able to attend this year and am very thankful.  This was a great experience, it was incredibly eye-opening, and the positive effect it had on my own thought process can’t be overstated.  I think it will benefit me throughout my career and hopefully can be used to create opportunities for myself in the future.


Creating a new instance-store AMI for Amazon AWS EC2

This is a HOWTO on building your own instance-store backed AMI image which is suitable for creating a Paid AMI. The motivation for writing this HOWTO is simple: I tried it, and it has a lot of little gotchas, so I want some notes for myself. This HOWTO assumes you’re familiar with launching EC2 instances, logging into them, and doing basic command line tasks.

Choosing a starting AMI

There’s a whole ton of AMIs available for use with EC2, but not quite so many which are backed by instance-store storage. Why’s that? Well, EBS is a lot more flexible and scalable. The instance-store images have a fairly limited size for their root partition. For my use case, this isn’t particularly important, and for many use cases, it’s trivial to mount some EBS volumes for persistent storage.

Amazon provides some of their Amazon Linux AMIs which are backed by EBS or instance-store, but they’re based on CentOS, and frankly, I’ve had so much trouble with CentOS in the past that I just prefer my old standby: Ubuntu. Unfortunately, I had a lot of trouble finding a vanilla Ubuntu 12.04 LTS instance-store backed image through the AWS Console. They do exist, however, and they’re provided by Canonical. Thanks guys!

Here’s a list of all the 12.04 Precise official AMIs:
http://cloud-images.ubuntu.com/releases/precise/release/

Conveniently, there’s a Launch button right there for each AMI instance. Couldn’t be easier!

Installing the EC2 Tools

Once you’ve got an instance launched and you’re logged in and sudo’d to root, you’ll need to install the EC2 API and AMI tools provided by Amazon. The first step is, of course, to download them. Beware! The tools available through the Ubuntu multiverse repositories are unfortunately out of date.

The latest EC2 API tools can be found here:
http://aws.amazon.com/developertools/351

The latest EC2 AMI tools can be found here:
http://aws.amazon.com/developertools/368

I like to copy the download link and use wget to download them rather than scp’ing them from my client machine.

sudo su
mkdir -p /tmp/ec2-tools
cd /tmp/ec2-tools
wget -O ec2-api-tools.zip 'http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9'
wget -O ec2-ami-tools.zip 'http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip'

Before we can install the EC2 tools, we need to install a few packages that our vanilla Ubuntu is lacking, namely zip, Java, and Ruby (the AMI tools are written in Ruby).

apt-get install zip
apt-get install openjdk-6-jre-lib
apt-get install ruby

Once we have those installed, we need to unzip our packages and install them to the /usr/local directory.

unzip "*.zip"
find . \( -name bin -o -name lib -o -name etc \) | \
    xargs -I path cp -r path /usr/local

Lastly we have to set the EC2_HOME and the JAVA_HOME environment variables for the EC2 tools to work properly. I like to do this by editing /etc/bash.bashrc so anyone on the machine can use the tools without issue.

echo -e "\nexport EC2_HOME=/usr/local\nexport JAVA_HOME=/usr\n" >> /etc/bash.bashrc

Once we log out and back in, those variables will be set, and the EC2 tools will be working.

# exit
$ sudo su
# ec2-version
1.6.7.4 2013-02-01

Customizing Your AMI

At this point, your machine should be all set for you to do whatever customization you need to do. Install libraries, configure boot scripts, create users, get your applications set up, anything at all. Once you’ve got a nice, stable (rebootable) machine going, then you can image it.

Bundling, Uploading and Registering your AMI

This is actually pretty easy, but I’ll still go through it. The Amazon documentation is fairly clear, and I recommend following along with that as well, as it explains all the options to each command.

Here’s the official Amazon documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

  1. Create an S3 bucket. This is where you’ll upload your AMI images. If you already have a bucket, you can use that.
  2. Download your AWS security certificates and copy your API keys. They can be found here: https://portal.aws.amazon.com/gp/aws/securityCredentials
  3. Copy your credentials to the instance you’re going to image. First, create a directory to store them in on your instance:
    mkdir -p /tmp/cert
    chmod 777 /tmp/cert
  4. Then copy them from the place you downloaded them on your client machine, to your instance:
    scp -i <keypair_name> pk-*.pem cert-*.pem ubuntu@<host_name>:/tmp/cert
  5. Bundle your instance image. The actual image bundle and manifest will end up in /tmp.
    cd /tmp/cert
    ec2-bundle-vol -k <private_keyfile> -c <certificate_file> \
        -u <user_id> -e <cert_location>
    cd /tmp
  6. Upload your bundled image. Note that <your-s3-bucket> should include a path that is unique to this image, such as my-bucket/ami/ubuntu/my-ami-1, otherwise things will get very messy for you, because an image consists of an image.manifest.xml file and many chunks which compose the image itself, which are generically named by default when you use this tool.
    ec2-upload-bundle -b <your-s3-bucket> -m <manifest_path> \
        -a <access_key> -s <secret_key>
  7. Register your new AMI.
    ec2-register <your-s3-bucket>/<path>/image.manifest.xml -n <image_name> \
        -O <your_access_key> -W <your_secret_key>

That’s it! You should be all set with a new AMI, which should also show up in the AWS Console.
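
If you want to sanity check the result from the command line instead of the console, something along these lines should work using the same EC2 API tools we installed earlier (the AMI ID is a placeholder for whatever ec2-register returned):

ec2-describe-images -o self -O <your_access_key> -W <your_secret_key>
ec2-run-instances <ami_id> -n 1 -k <keypair_name> -t m1.small \
    -O <your_access_key> -W <your_secret_key>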


Nexus 1000v in a Hyper-V 2012 Environment (Part 1)

In the next few posts I will be going over some of the basics on how to get the Nexus 1000v set up and working in a Hyper-V environment.  I must warn readers ahead of time: this product was just released (as of a week or two ago) and the Cisco documentation is seriously lacking.  What documentation does exist is thoroughly confusing, so it may take some time to work through all of the issues.  Just as irritating, if not more so, the Hyper-V way of doing things is equally confusing.  Taking on a project like this will surely improve your skills and abilities with virtualization, especially network virtualization.  I must admit, this stuff can get very confusing at first, so it is important to realize that you might not understand everything right away; just be patient, it will eventually start making more sense.

First I need to lay some groundwork.  I think it’s important, not only in this example but as a good habit in general, to spec out a project and figure out all of the requirements so you have everything lined up before tackling it.  A few important considerations when working with the 1000v are to make sure the networking and NICs on the Hyper-V hosts are set correctly, Virtual Machine Manager (SCVMM) is installed and configured, the network is configured (LACP port channels, trunk ports, correct VLAN assignment, etc.) and that configuring all of these pieces won’t cause any downtime or other issues with your production network.  Ideally, all of this would be thought through and set up ahead of time.  Luckily I have a test environment with SCVMM available, so I do not have to worry about any real world downtime or production issues.

One of the most important things to get established is getting the underlying Hyper-V network stack configured properly.  I try to mimic a production type environment as much as possible so this configuration is a typical design you may see in the real world.  So let’s lay out the structure of the design.

  • Management VLAN(s)
  • DMZ VLAN(s)
  • Inside VLAN(s)
  • Live Migration VLAN(s)

It is common to break these out across different physical connections, so as an example you might see 4 different NICs on the Hyper-V host connecting to a switch that has 4 different VLANs configured.  If you want redundancy you can add NIC teaming into this scenario (which is nice, since teaming is now native in Server 2012).  I have limited resources, so I am using a single NIC each for management, DMZ and live migration traffic, and teaming the inside connection with 2 NICs.  Here is a crude example of how this is set up.

Hyper-V architecture

If you are setting this up in a clustered environment, you would want these settings to be identical across all Hyper-V hosts.  Once this is set up correctly, make sure you have SCVMM installed and configured.  That is a separate process and therefore out of the scope of this post; I’d be happy to answer any questions you have, I’m just not discussing it here.  You will need to grab the Cisco Nexus 1000v for Hyper-V.  To download the files necessary for installation you will need a valid Cisco ID (let me know if you don’t have one).  Cisco also provides some documentation as well as some installation video links, but I have found them to be less than helpful to be honest.  There is some useful information in there to be sure; I just want to walk you through the process myself, because there were a few caveats and the documentation creates a lot of unnecessary confusion.

There is some basic terminology to be familiar with when getting the 1000v up and going; it helps to understand how and why the different pieces work the way they do as you run through the installation.

  • vsm – virtual supervisor module.  This logically controls the virtual switch and can be thought of as a virtual supervisor engine that manages the different VEMs.
  • vem – virtual ethernet module.  This is the piece that actually replaces the virtual switch on each Hyper-V host.
  • nsm – network segmentation manager.

Once you have the 1000v downloaded you need to make sure you run the installation on the server that is hosting SCVMM.  The installer is hidden in the following location:

\Nexus1000v.5.2.1.SM1.5.1\VSM\Installer_App\Cisco.Nexus1000VInstaller.UI.exe

When you run this executable it should bring up a GUI to install and configure the virtual switch(es).  You will need to use an account that is a member of the SCVMMAdmins group in Active Directory, otherwise the installer will not be able to connect to SCVMM and will not be able to create and configure the VM for the new virtual switch.

Authenticate to SCVMM

The next portion of the installer is where things may get confusing if you don’t know what you are looking for.  I have linked to the sample configuration I used in my lab to help with this.  Since this is what I used in my test environment, I know that at least at one point this configuration worked.  It would be a good idea to deploy the VSMs in high availability if you can; otherwise it isn’t a big deal.

  • Choose a meaningful VSM name; this is basically the same as the host name.
  • The ISO install location is \Nexus1000v.5.2.1.SM1.5.1\VSM\Install\nexus-1000v.5.2.1.SM1.5.1.iso.
  • From the documentation I’ve read, the VEM MSI location indicated is a little misleading because it points at the wrong installation file.  It should point at \Nexus1000v.5.2.1.SM1.5.1\VMM\Nexus1000V-VSEMProvider-5.2.1.SM1.5.1.0.msi.
  • The VSM IP address should be an address in your management network; it can basically be thought of as the address you use to connect to the 1000v virtual switch.
  • The subnet mask should be fine as 255.255.255.0.
  • The gateway IP should be the default gateway on the same subnet as the VSM IP address.
  • The domain ID is an arbitrary number that is associated with the virtual network.  For most use cases you should be able to use one ID, 1000 in my example.
  • Use the VLAN ID that your VSM is on; in my case it is my management VLAN ID.
  • Since our management VLAN is the same as the VSM VLAN (typical in most deployments), simply choose “Yes” here.

1000v deployment config

At this point everything should be configured; the installer just needs to go out, create the VMs and take care of getting everything up and running.  It may take a while, so take a break if needed and come back later.

Wait for the installation to finish

Everything should complete successfully; if not, you will need to look at the log file and troubleshoot any errors you run into.

Installation summary

Almost done.  Everything should be out there and running, but there is still one very important step left.  If you notice, about halfway down the installation summary page there is a username/password of admin and admin.  These credentials obviously will change once the 1000v gets put into use, but there is NOTHING in the documentation that tells you that changing them will break the configuration in SCVMM!

What you need to do is hop on the SCVMM server and manually configure the credentials that are used to connect to the 1000v switch.  To do this, drill down into the security settings in SCVMM: flip open the Configuration pane -> Security -> Run As accounts -> right-click your 1000v admin account and select Properties.

Updating the admin account in SCVMM

Then you will change the username and password to match the credentials that you have set on the 1000v. This will allow the switch to communicate with the SCVMM server so that 1000v network settings can be managed through Hyper-V.
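
For reference, setting those credentials on the VSM itself happens on the NX-OS CLI; a rough sketch (the password is a placeholder) looks like this:

conf t
username admin password <new_password>
exit
copy running-config startup-config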

In Part 2 I will discuss the intricacies of configuring the 1000v as well as how to reflect these settings in your Hyper-V virtual environment.  Since this is a brand new product, there are still some things that need to get worked out, especially the documentation.  And as I mentioned earlier, the network settings in Hyper-V and SCVMM can be extremely confusing the first time you see them.  Working through and troubleshooting these issues will quickly help improve your knowledge and understanding of how Hyper-V and the Nexus 1000v work together to improve virtual networking.  If you have any questions or concerns about any of this I will try to help, but I am not promising anything at this point.


Getting Python Fabric setup in Windows

This has really turned into a wild goose chase.  Initially my goal when I set out on this project was simply to get Fabric up and running so I could test out some different features on some network gear.  It seems like the Python integration in Windows is very different than it is in the Linux world where everything is all bundled up nice and neatly.  There are several separate, seemingly unrelated pieces that all need to fit together to get Python and Fabric working correctly in a Windows environment, which can be very perplexing at first, hence my need to write a post so I don’t have to remember all this complexity for next time.  I thought I might as well show people how I got this to work instead of picking and choosing different bits of information from the internet.

The following is a list of links that I have found to be helpful in getting everything up and going; flip back here for the different resources and components:

There are a few steps to getting up and running.  For basic Python functionality it should be enough to download and install Python via the basic installer in your Windows environment, accepting the defaults.  I recommend going with Python 2.7 rather than 3.3 because it has much better backwards compatibility.  You will also want to double check that you download the correct version for your OS, either 32-bit or 64-bit.

Once you have your Python install up and going you will want to get pip installed. You will use this tool to get Python modules because it aids tremendously with downloading, managing and installing useful Python code.

So to get up and running with pip, first make sure that you have the correctly matched versions of Python and the pip installer for your environment.  For example, the 2.7 pip installer will not work with a 3.3 Python installation.  Second, you will need to make sure you have the Distribute package installed in your Python environment as well; this is the tool that allows pip to work.  Once you have these pieces installed you will need to switch to the directory where pip is installed (or add it to your PATH environment variable).  For me it was located in the following location:

C:\Python27\Scripts\pip.exe
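
If you would rather not type the full path every time, one quick (and admittedly crude) way to append the Python directories to your PATH, assuming the default install location, is:

setx PATH "%PATH%;C:\Python27;C:\Python27\Scripts"

Open a new command prompt afterwards so the updated PATH takes effect.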

So the command to install Fabric would be as follows:

pip.exe install fabric

You would think that’s all you need to get Fabric working, right?  Well it turns out that using this method we do not have the correct version of Pycrypto installed.

pycrypto error

So using the link posted above, go ahead and get the correct version of Pycrypto downloaded and installed (version 2.1.0).  That still doesn’t fix it though!  It just gets us to a different error.  I used this post and this post as guides for getting the correct version of Pycrypto installed on the Windows machine.

Okay, so now we should have a fully functioning Python environment with Fabric installed.  The main issue that remains at this point (to my knowledge at least) is that pip still doesn’t work quite right when attempting to install various Python packages.  To get that part working you will need MinGW32 installed (see the links above).  But that is basically out of the scope of this post; I will write another post about it if there is any interest, or you can ask me if you have issues, as always.

The only other piece left then is to get Fabric up and going with our Cisco gear.  Take a look at the docs for basic usage on getting acquainted with Fabric; it is fairly straightforward for the most part.
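
As a rough sketch, a minimal fabfile for running a simple command against a switch might look something like this (the host, username and password are made-up placeholders):

from fabric.api import env, run

env.hosts = ['192.168.10.5']
env.user = 'admin'
env.password = 'password'

def show_ver():
	# shell=False keeps Fabric from wrapping the command in bash,
	# which Cisco devices do not have
	run("show version", shell=False)

Save it as fabfile.py and run it with fab show_ver; the fab.exe script ends up in the same Scripts directory as pip.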

One thing I was not aware of was the way Cisco CLI and devices would behave when using Fabric to control them remotely.  I was having issues with Fabric flaking out whenever I went into config mode on a Cisco switch.  It turns out that when you enter into config mode you are essentially dropped into a new shell and Fabric doesn’t have a nice way to deal with that.  So something like this will bomb out,

def test():
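	# each run() is sent separately; "conf t" drops the device into a new shell
	# that Fabric can't follow, so the commands after it fail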
	run("conf t", shell=False)
	run("int 1/0/1", shell=False)
	run("no shut", shell=False)
	run("exit", shell=False)

The “conf t” command opens your new shell and the Cisco gear freaks out because it doesn’t know what to do with the next command.  I should also mention the shell=False is somewhat unrelated to this issue, but it gets around Fabric trying to use bash as its default shell.  The workaround?  Use the open_shell command in Fabric and separate each command with \n so each one lands on a new line.  So a sample command using this format would be something like the following,

def test():
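	# feed the whole command sequence into one interactive session,
	# separating the commands with newlines (open_shell comes from fabric.api)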
	open_shell("conf t \n"
		   "ip name-server 1.1.1.1 \n"
		   "exit \n"
		   "exit \n"
		   )

Yeah this is sort of hacky, and I’m not sure if it will be able to do everything I am looking for but hey at least it kind of works.  I am currently looking for a more robust and easier way around this limitation so if you have any suggestions let me know.

Credit goes to markmm on reddit for letting me know about this workaround as well as the people who hang out on the #fabric irc channel on freenode.
