Enable telnet with PowerShell

Every once in a while you will probably encounter a situation where you need to enable and then use telnet in a security focused environment.  In certain situations telnet can be a great tool to test the functionality of a firewall rule.  If you aren't certain whether or not a rule is working, telnet can be a great way to help debug.  The problem in Server 2008 and above is that telnet isn't enabled by default.  Luckily, with PowerShell it is easy to enable the telnet functionality.

The following set of commands is a quick example of how you can enable telnet from a PowerShell prompt so that you are able to test specific ports.  Try it out.

Import-Module servermanager
Add-WindowsFeature telnet-client
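
Once the client is installed, testing a port is as simple as pointing telnet at a host and port number from the same prompt.  The host and port below are just placeholders; swap in whatever you are actually trying to reach:

# Placeholder host and port; a blank screen means the connection opened,
# while a blocked port will just hang and eventually time out
telnet mail.example.com 25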

Bam!  As always, it is easier to stay at the command prompt, and this is a great way to test port connectivity.  I can understand why telnet is disabled by default on fresh server builds, but sometimes it is useful to have telnet as a tool to test connectivity.  If you would like to debate the merits of disabling/enabling telnet on a server, just drop me a line; I obviously will not be focusing on that aspect here.  Anyway, just as easily as telnet can be enabled through PowerShell, it can be disabled with the following command.  If you already have the Server Manager module imported, skip to the second command.

Import-Module servermanager
Remove-WindowsFeature telnet-client

That’s all it takes.  Very simple and very straightforward.

For more tips and tricks as well as general information about how PowerShell works, check out the venerable Learn Windows PowerShell in a Month of Lunches.

This book is one of my top recommendations on the book recommendations page, especially for learning PowerShell and Windows administration.


Quickly Find Exchange Database Usage

Here is a PowerShell script you can use to quickly determine the total amount of space taken up by all of the Exchange database files (EDB files) on an Exchange server.  I'd like to note that this may not be a 100% accurate representation, but it is a great way to get a ballpark number without having to add everything up yourself, manually.

$dbs = Get-MailboxDatabase -Status
$totalsize = 0

foreach ($db in $dbs) {
    # Convert each database size to raw bytes and add it to the running total
    $edbsize = $db.DatabaseSize.ToBytes()
    $totalsize += $edbsize
}

Write-Host $totalsize

I noticed the other day that I had no way to calculate the total amount of space being used by my Exchange databases.  And even after scouring through the Googles I was unable to quickly find what I was looking for, so I wrote this script up to fix that problem.  Just copy the previous bit of code into a .ps1 file with Notepad and execute the script from your Exchange Management Shell (EMS).  It is a super simple way to iterate through all the databases, save their sizes to a variable, and then spit that variable out when it is complete.
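
If you'd rather skip the script file altogether, the same ballpark number can be had with a quick pipeline, this time reported in gigabytes.  This is just a sketch and assumes you are running it from the Exchange Management Shell:

# Sum every database size in bytes, then report the total in GB
$totalGB = (Get-MailboxDatabase -Status | ForEach-Object { $_.DatabaseSize.ToBytes() } | Measure-Object -Sum).Sum / 1GB
Write-Host ("Total database size: {0:N2} GB" -f $totalGB)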


Disable Offline Files in Windows 7

Offline Files in Windows is a set of features that essentially gives users the ability to keep working with network files while away from the network.  So for example, if a user had a laptop with a mapped drive or network share and took that computer outside of the network, the features offered by Offline Files would allow the user to continue working with those files.  I will not cover the details of how all of this magic works in this post; I just want to show people the best way I have found to disable this feature with the least amount of problems.  If you want to go straight to the source, here is the original article that gave me about 95% of the information necessary for accomplishing this task.

The remainder of this post will detail my findings and experience from the link above.  This feature (Offline Files) is enabled by default in Windows 7.  Here is a good overview of the benefits of Offline Files.  However, for me personally as an admin, this feature has so far caused a lot of confusion for users who are not accustomed to having it as we move towards Windows 7.

These settings can of course be controlled on a per user basis by changing the settings and configuration of the “Sync Center” tool in Windows.  But when you are involved in a larger environment and need this sort of process automated for many users, Group Policy becomes the most effective way to handle the problem.  There are a few steps to get offline folders disabled correctly, so I thought I would share all the pieces in case somebody runs across a similar need.  The first step to disable the Offline Files features is to adjust the following settings in Group Policy:

Computer Configuration -> Policies -> Administrative Templates -> Network -> Offline Files

  • Allow or Disallow use of the Offline Files feature: Disabled
  • Prohibit user configuration of Offline Files: Enabled
  • Sync all offline files when logging on: Disabled
  • Sync all offline files before logging off: Disabled
  • Sync offline files before suspend: Disabled
  • Remove “Make available offline” command: Enabled
  • Prevent use of Offline Files folder: Enabled

Next, we need to tell Group Policy to shut off the Offline Files service and disable it on all Windows machines that have the service installed (Windows XP, 7, and 8 machines).  To do this you will need to modify your Group Policy settings from a machine that already has the service installed, through RSAT.  This is an important step: you will not be able to find this service if you are adjusting the GP settings from a server.  This service is located in the following location:

Computer Configuration -> Policies -> Windows Settings -> Security Settings -> System Services

The specific service we are looking for is “CscService”, which corresponds to the service labeled “Offline Files” in the Windows services list.
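
As a quick aside, if you only need to shut this off on a single machine rather than across the domain, the service can also be disabled directly from an elevated PowerShell prompt.  This is just a sketch of the one-off approach; the Group Policy route above is still the way to go at scale:

# Disable the Offline Files service on the local machine; the change fully takes effect after a reboot
Set-Service -Name CscService -StartupType Disabled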

The last step to get this policy working correctly is to add a registry key that will fix machines that have already cached certain network resources.  Essentially, adding this registry key tells the machine to blow away its database of offline files and remove the cached files as well.  To configure this setting we need to add a custom registry entry:

Computer Configuration -> Preferences -> Windows Settings -> Registry

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CSC\Parameters
Value name: FormatDatabase
Value type: DWORD
Value data: 1
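
If you ever need to stamp this value onto a machine by hand instead of through Group Policy Preferences, a rough PowerShell equivalent (run from an elevated prompt) would look something like this:

# Create the Parameters key if it is missing, then set FormatDatabase to 1; the CSC service picks it up at the next reboot
$path = 'HKLM:\SYSTEM\CurrentControlSet\services\CSC\Parameters'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'FormatDatabase' -PropertyType DWord -Value 1 -Force | Out-Null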

Here is a good article with instructions on how to change the registry settings by hand, and below is a screenshot of my own GP environment showing how the settings should look in the GP Management Console.

Offline files registry entry

That should be all of the changes that need to be made.  If I missed anything let me know; hopefully this will save people some time in the future.


Creating a new instance-store AMI for Amazon AWS EC2

This is a HOWTO for building your own instance-store backed AMI image, suitable for creating a Paid AMI. The motivation for writing this HOWTO is simple: I tried it, and it has a lot of little gotchas, so I want some notes for myself. This HOWTO assumes you're familiar with launching EC2 instances, logging into them, and doing basic command line tasks.

Choosing a starting AMI

There’s a whole ton of AMIs available for use with EC2, but not quite so many which are backed by instance-store storage. Why’s that? Well, EBS is a lot more flexible and scalable. The instance-store images have a fairly limited size for their root partition. For my use case, this isn’t particularly important, and for many use cases, it’s trivial to mount some EBS volumes for persistent storage.

Amazon provides their own Amazon Linux AMIs backed by either EBS or instance-store, but they’re based on CentOS, and frankly, I’ve had so much trouble with CentOS in the past that I just prefer my old standby: Ubuntu. Unfortunately, I had a lot of trouble finding a vanilla Ubuntu 12.04 LTS instance-store backed image through the AWS Console. They do exist, however, and they’re provided by Canonical. Thanks, guys!

Here’s a list of all the 12.04 Precise official AMIs:
http://cloud-images.ubuntu.com/releases/precise/release/

Conveniently, there’s a Launch button right there for each AMI instance. Couldn’t be easier!

Installing the EC2 Tools

Once you’ve got an instance launched and you’re logged in and sudo‘d to root, you’ll need to install the EC2 API and AMI tools provided by Amazon. The first step is, of course, to download them. Beware! The tools available through the Ubuntu multiverse repositories are unfortunately out of date.

The latest EC2 API tools can be found here:
http://aws.amazon.com/developertools/351

The latest EC2 AMI tools can be found here:
http://aws.amazon.com/developertools/368

I like to copy the download link and use wget to download them rather than scp‘ing them from my client machine.

sudo su
mkdir -p /tmp/ec2-tools
cd /tmp/ec2-tools
wget -O ec2-api-tools.zip 'http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9'
wget -O ec2-ami-tools.zip 'http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip'

Before we can install the EC2 tools, we need to install a few packages that our vanilla Ubuntu is lacking, namely zip/unzip, Java, and Ruby.

apt-get install zip unzip
apt-get install openjdk-6-jre-lib
apt-get install ruby

Once we have those installed, we need to unzip our packages and install them to the /usr/local directory.

unzip "*.zip"
find . \( -name bin -o -name lib -o -name etc \) | \
    xargs -I path cp -r path /usr/local

Lastly we have to set the EC2_HOME and the JAVA_HOME environment variables for the EC2 tools to work properly. I like to do this by editing /etc/bash.bashrc so anyone on the machine can use the tools without issue.

echo -e "\nexport EC2_HOME=/usr/local\nexport JAVA_HOME=/usr\n" >> /etc/bash.bashrc

Once we log out and back in, those variables will be set, and the EC2 tools will be working.

# exit
$ sudo su
# ec2-version
1.6.7.4 2013-02-01

Customizing Your AMI

At this point, your machine should be all set for you to do whatever customization you need to do. Install libraries, configure boot scripts, create users, get your applications set up, anything at all. Once you’ve got a nice, stable (rebootable) machine going, then you can image it.

Bundling, Uploading and Registering your AMI

This is actually pretty easy, but I’ll still go through it. The Amazon documentation is fairly clear, and I recommend following along with that as well, as it explains all the options to each command.

Here’s the official Amazon documentation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

  1. Create an S3 bucket. This is where you’ll upload your AMI images. If you already have a bucket, you can use that.
  2. Download your AWS security certificates and copy your API keys. They can be found here: https://portal.aws.amazon.com/gp/aws/securityCredentials
  3. Copy your credentials to the instance you’re going to image. First, create a directory to store them in on your instance:
    mkdir -p /tmp/cert
    chmod 777 /tmp/cert
  4. Then copy them from the place you downloaded them on your client machine, to your instance:
    scp -i <keypair_name> pk-*.pem cert-*.pem ubuntu@<host_name>:/tmp/cert
  5. Bundle your instance image. The actual image bundle and manifest will end up in /tmp.
    cd /tmp/cert
    ec2-bundle-vol -k <private_keyfile> -c <certificate_file> \
        -u <user_id> -e <cert_location>
    cd /tmp
  6. Upload your bundled image. Note that <your-s3-bucket> should include a path that is unique to this image, such as my-bucket/ami/ubuntu/my-ami-1; otherwise things will get very messy for you, because an image consists of an image.manifest.xml file plus the many chunks which compose the image itself, all of which are generically named by default when you use this tool.
    ec2-upload-bundle -b <your-s3-bucket> -m <manifest_path> \
        -a <access_key> -s <secret_key>
  7. Register your new AMI.
    ec2-register <your-s3-bucket>/<path>/image.manifest.xml -n <image_name> \
        -O <your_access_key> -W <your_secret_key>

That’s it! You should be all set with a new AMI, which should also show up in the AWS Console.


Nexus 1000v in a Hyper-V 2012 Environment (Part 1)

In the next few posts I will be going over some of the basics of how to get the Nexus 1000v set up and working in a Hyper-V environment.  I must warn readers ahead of time: this product was just released (as of a week or two ago) and the Cisco documentation is seriously lacking.  What documentation does exist is thoroughly confusing, so it may take some time to work through all of the issues.  Just as irritating, if not more so, the Hyper-V way of doing things is equally confusing.  Taking on a project like this will surely improve your skills and abilities with virtualization, especially network virtualization.  I must admit, this stuff can get very confusing at first, so it is important to realize that you might not understand everything right away; just be patient, it will eventually start making more sense.

First I need to lay some groundwork.  I think it's important, not only in this example but as a good habit in general, to spec out a project and figure out all of the requirements so you have everything lined up before tackling it.  A few important considerations when working with the 1000v are making sure the networking and NICs on the Hyper-V hosts are set correctly, that Virtual Machine Manager (SCVMM) is installed and configured, that the network is configured (LACP port channels, trunk ports, correct VLAN assignment, etc.), and that configuring all of these pieces won't cause any downtime or other issues on your production network.  Ideally, all of this would be thought out and set up ahead of time.  Luckily I have a test environment, including SCVMM, so I do not have to worry about any real world downtime or production issues.

One of the most important things to get established is a properly configured underlying Hyper-V network stack.  I try to mimic a production type environment as much as possible, so this configuration is a typical design you may see in the real world.  So let's lay out the structure of the design.

  • Management VLAN(s)
  • DMZ VLAN(s)
  • Inside VLAN(s)
  • Live Migration VLAN(s)

It is common to break these out onto different physical connections, so as an example you might see 4 different NICs on the Hyper-V host connecting to a switch that has 4 different VLANs configured.  If you want redundancy you can add NIC teaming into this scenario (which is now native in Server 2012, which is nice).  I have limited resources, so I am using a single NIC each for management, DMZ, and live migration traffic, and teaming the inside connection with 2 NICs.  Here is a crude example of how this is set up.

Hyper-V architecture
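
As a side note, if you do go the teaming route, the teaming that is native to Server 2012 can be set up with a single line of PowerShell.  This is just a sketch; the team name and adapter names are placeholders for whatever your inside NICs are actually called:

# Create a switch independent team from the two placeholder "inside" adapters, using Hyper-V port load balancing
New-NetLbfoTeam -Name "InsideTeam" -TeamMembers "Ethernet 3","Ethernet 4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort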

If you are setting this up in a clustered environment, you will want these settings to be identical across all Hyper-V hosts.  Once this is set up correctly, make sure you have SCVMM installed and configured.  That is a separate process and is out of the scope of this post; I'd be happy to answer any questions you have, I'm just not discussing it here.  You will also need to grab the Cisco Nexus 1000v for Hyper-V.  To download the files necessary for installation you will need a valid Cisco ID (let me know if you don't have one).  Cisco also provides some documentation as well as some installation video links, but I have found them to be less than helpful, to be honest.  There is some useful information in there, to be sure; I just want to walk you through the process myself because there were a few caveats and the documentation creates a lot of unnecessary confusion.

There is some basic terminology to be familiar with when getting the 1000v up and going; knowing it helps you understand how and why the different parts work the way they do when running through the installation.

  • VSM – Virtual Supervisor Module.  This logically controls the virtual switch and can be thought of as the supervisor that manages the different VEMs, much like line cards in a physical chassis.
  • VEM – Virtual Ethernet Module.  This is the piece that actually replaces the virtual switch on each host.
  • NSM – Network Segmentation Manager.

Once you have the 1000v downloaded, you need to make sure you run the installation for it on the server that is hosting SCVMM.  The installer is hidden in the following location:

\Nexus1000v.5.2.1.SM1.5.1\VSM\Installer_App\Cisco.Nexus1000VInstaller.UI.exe

When you run this executable it should bring up a GUI to install and configure the virtual switch(es).  You will need to use an account that is a member of the SCVMMAdmins group in Active Directory; otherwise the installer will not be able to connect to SCVMM and will not be able to create and configure the VM for the new virtual switch.

Authenticate to SCVMM

The next portion of the installer is where things may get confusing if you don't know what you are looking for.  I have linked to the sample configuration I used in my lab to help with this.  Since this is what I used in my test environment, I know that at least at one point this configuration worked.  It would be a good idea to deploy the VSMs in high availability if you can; otherwise it isn't a big deal.

  • Choose a meaningful name for the VSM; basically this is the same as the host name.
  • The ISO install location is \Nexus1000v.5.2.1.SM1.5.1\VSM\Install\nexus-1000v.5.2.1.SM1.5.1.iso.
  • From the documentation I've read, the VEM MSI location indicated is a little misleading because it points at the wrong installation file.  It should point at \Nexus1000v.5.2.1.SM1.5.1\VMM\Nexus1000V-VSEMProvider-5.2.1.SM1.5.1.0.msi.
  • The VSM IP address should be an address in your management network; it can basically be thought of as the address used to connect to the 1000v virtual switch.
  • The subnet mask should be fine as 255.255.255.0.
  • The gateway IP should match up with the VSM IP address; essentially they just need to be on the same subnet.
  • The domain ID is an arbitrary number that is associated with the virtual network.  For most use cases you should be able to use one ID, 1000 in my example.
  • Use the VLAN ID that your VSM is on; in my case it is my management VLAN.
  • Since our management VLAN is the same as the VSM VLAN (typical in most deployments), simply choose "Yes" here.

1000v deployment config

At this point everything should be configured; the installer just needs to go out, create the VMs, and take care of getting everything up and running.  It may take a while, so take a break if needed and come back later.

Wait for the installation to finish

Everything should complete successfully; if not, you will need to look at the log file and troubleshoot any errors you may have.

Installation summary

Almost done.  Everything should be out there and running, but there is still one very important step left.  If you notice, about halfway down the installation summary page there is a username/password of admin and admin.  This obviously will change once the 1000v gets put into use, but there is NOTHING in the documentation that tells you that changing it will break the configuration in SCVMM!

What you need to do is hop on the SCVMM server and manually configure the credentials that are used to connect to the 1000v switch.  To do this, drill down into the security settings in SCVMM by flipping open the Configuration pane -> Security -> Run As Accounts -> right click your 1000v admin account and select Properties.

Updating the admin account in SCVMM

Then you will change the username and password to match the credentials that you have set on the 1000v. This will allow the switch to communicate with the SCVMM server so that 1000v network settings can be managed through Hyper-V.
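
If you prefer to make the same change from PowerShell, the VMM cmdlets can update the Run As account as well.  This is just a sketch; the account name is a placeholder for whatever you called the 1000v admin account, and it assumes the VMM cmdlets are available on the SCVMM server:

# Prompt for the new 1000v credentials and apply them to the placeholder Run As account
$cred = Get-Credential
Get-SCRunAsAccount -Name "1000v-admin" | Set-SCRunAsAccount -Credential $cred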

In Part 2 I will discuss the intricacies of configuring the 1000v as well as how to reflect these settings in your Hyper-V virtual environment.  Since this is a brand new product, there are still some things that need to get worked out, especially the documentation.  And as I mentioned earlier, the network settings in Hyper-V and SCVMM can be extremely confusing the first time you see them.  Working through and troubleshooting these issues will quickly help improve your knowledge and understanding of how Hyper-V and the Nexus 1000v work together to improve virtual networking.  If you have any questions or concerns about any of this I will try to help, but I am not promising anything at this point.
