Quickly get Node.js up and running on Windows

Installing software on Windows in an automatable, repeatable and easy way has always been painful in the past. Luckily, in recent years there have been some really nice additions to Windows and its ecosystem that have improved the process significantly. The main tools that ease this process are PowerShell and Chocolatey, and together they have significantly improved the developer and administrative experience on Windows.

In the past, in order to install something like a programming language and its environment you would have to manually download the zip or tar file, extract it, put it in the correct place, set up environment variables and system paths by hand, and so on. Things would also break pretty easily, and it was just painful in general to work with.

Hopefully you are already familiar with PowerShell, because I won’t be covering it much in this post. If you have any recent version of Windows you should have PowerShell. Below I describe Chocolatey a little bit and why it is useful; you can also check out the Chocolatey website, which does a much better job of explaining its benefits, how it is used and why package managers are good.

Update Windows execution policy

This process is pretty straightforward. Make sure you open up a PowerShell prompt with admin privileges, otherwise you will run into problems. The first step is to change the default system execution policy (if you haven’t already). On a fresh install of Windows, you will need to loosen up the security in order to install Chocolatey, which will be used to install and manage Node.js. Luckily there are just a few PowerShell commands that need to be run. To check the status of the execution policy, run the following.

Get-ExecutionPolicy
Restricted

This should tell you what your execution policy is currently set to.  To loosen the policy for Choco, run the following command.

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Follow the prompt and choose [Y] to update the policy.  Now, if you run Get-ExecutionPolicy you should see RemoteSigned.

Get-ExecutionPolicy
RemoteSigned

If you don’t have your execution policy opened up to at least RemoteSigned, you will have trouble installing things from the internet, including Chocolatey. You can find more information about execution policies here if you don’t trust me or just want a better idea of how they work.

Install Chocolatey

If you aren’t familiar, Chocolatey is a package manager for Windows, similar to apt-get on Debian-based Linux systems or yum on Red Hat-based systems. It allows users to quickly and easily install and manage software packages on Windows platforms through PowerShell.

The command to install Chocolatey is listed below.

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex

This command will take care of pretty much all of the setup so just watch it do its thing.  Again, make sure you are inside of an elevated admin shell, otherwise you will likely have problems with the installation.
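
Once the script finishes, you can sanity check the install with a couple of quick commands. This is just a rough sketch; the list flags vary a little between Chocolatey versions, so adjust to whatever your version accepts.

# Confirm the choco binary is on the PATH and report its version
choco --version

# List the packages Chocolatey currently manages on this machine
choco list --local-only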

Install Node.js

The last step (finally) is to install Node.js.  Luckily this is the easiest part.  Just run the following command.

choco install nodejs.install

Choose [Y] to accept that you want to run the install script and let it run. There should be some colored output, and when it is done Node should be installed on your system. You will need to close and re-open your PowerShell prompt to get the Node binaries picked up on your PATH, or refresh the session by running RefreshEnv to pick up the new path. If you are in an admin shell I would recommend dropping out of it by simply closing the current session and opening up a new, non-privileged session.

Once you have a fresh shell you can test that Node installed properly.

node -v
v6.6.0

Now you are ready to go, and it only took a few minutes with the Choco package manager. If you are new to Node in general and are looking for a good resource, the learnyounode project on GitHub is pretty decent.
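
If you need to stay on a particular Node release, Chocolatey can handle that for you as well. A rough sketch, where the version number is just an example:

# Install a specific version of the Node.js package
choco install nodejs.install --version=6.6.0

# Pin the package so "choco upgrade all" won't bump it unexpectedly
choco pin add -n=nodejs.install

# Later, upgrade Node explicitly when you are ready
choco upgrade nodejs.install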

Let me know if you have any caveats to add to this method. It is the easiest and fastest way I have found to install Node, as well as other pieces of software, on Windows without any hassle.

Exchange Transport Service won’t start

Due to an outage this weekend, I’d like to take a minute to briefly describe the scenario that occurred and how it was resolved. If you are having trouble starting your Exchange Transport service then you may be running into the same issue I hit during the outage. Luckily there is an easy remedy for the service failing to start. What was happening was that the Exchange message queue database was beginning to fail due to some sort of corruption, causing the Transport service to fail. Because the Transport service wasn’t running, the EdgeSync process was failing, causing external mail delivery to fail. Obviously a big issue, since you cannot receive any email from external domains if this is not working correctly.

To troubleshoot this, there are a few obvious signs to look at first. The main thing to check is your disk sizes, which I wrote about in my previous post. If your disks are full or filling up then you are pretty much dead in the water and will need to fix the disk issue first. In my scenario the disk sizes were not the problem, so the next place I turned was the logs. I found a number of interesting entries in the Windows Application event log that gave me some clues. I want to detail as many of these messages as I can so that people who are having similar issues know what to look for.
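
If you prefer to pull those Application log entries from PowerShell rather than clicking through Event Viewer, something along these lines works. It is only a sketch; adjust the event count and filters to taste.

# Grab recent Application log errors raised by any Exchange component
Get-WinEvent -LogName Application -MaxEvents 200 |
    Where-Object { $_.ProviderName -like "MSExchange*" -and $_.LevelDisplayName -eq "Error" } |
    Select-Object TimeCreated, ProviderName, Id, Message |
    Format-List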

[Screenshots: Transport service error events from the Application log]

There are a few possible resolutions to this problem. One solution I found through some Google searches is to attempt to repair the corruption in the queue database by running it through eseutil. There is no guarantee this will work and it can potentially take a lot of time, depending on the size of your queue database. There is some good information here about the mail queue and how it works.

If you decide to repair the database, the mail queue file is located in the following location:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\data\Queue

Inside this directory is the queue database itself, mail.que, along with its transaction logs and a tmp.edb working file. The mail.que file is the one you will need to repair.
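
If you go the repair route, the work is done with eseutil against the queue database. A minimal sketch, assuming the default install path (eseutil lives in the Exchange Bin directory if it is not already on your PATH, and a hard repair should only be run with the Transport service stopped, as a last resort):

cd "C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\data\Queue"

# Stop the Transport service so the database files are not in use
Stop-Service MSExchangeTransport

# Check the state of the queue database header (look for "Dirty Shutdown")
eseutil /mh mail.que

# Hard repair of the queue database - data still in the queue can be lost
eseutil /p mail.que

Start-Service MSExchangeTransport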

The other method is much simpler and was the solution I went with. Instead of attempting to repair the database corruption, simply rename (or copy) the queue folder and restart the Transport service. Doing this forces the Transport service to create a new, fresh copy of the queue database along with all of the accompanying config files and associated items required to get things up and running. It is faster and simpler, IMO. The only problem with this approach is that items that were stuck in the queue when the corruption occurred will be lost. For me, this was an acceptable loss. If it isn’t for you, you will probably have to go with the first method and attempt to repair the database, or try to work from a shadow copy or backup to get unstuck.
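
For reference, the rename-and-restart approach boils down to something like this. A sketch assuming the default queue location; Exchange rebuilds the folder contents when the service starts.

# Stop the Transport service so the queue database is released
Stop-Service MSExchangeTransport

# Move the corrupt queue folder out of the way instead of deleting it
Rename-Item "C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\data\Queue" Queue.old

# On startup the Transport service creates a fresh Queue folder and database
Start-Service MSExchangeTransport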

Monitor your Exchange disk sizes

A word to the wise: if you are suddenly unable to send and receive email messages in your Exchange environment, take a look and make sure the Exchange server disks aren’t filling up. Today I ran across an interesting (and by interesting I mean that this could have caused a serious outage) issue where Windows updates were routinely being downloaded for our next patch management installation cycle but were also quietly causing our email services to stop functioning correctly. I am thankful the scenario didn’t get ugly, and luckily this event gives me the opportunity to talk about a few things that I think might be useful for readers and other admins.

It turns out that this month’s wave of Windows updates caused the disks on our Hub Transport servers to quietly fill up during the day, unbeknownst to any of the admins. In normal circumstances this download process is by design and almost never becomes an issue, but in this case there was not enough disk available for Exchange to work correctly. This could have been disastrous had we not known that the disk was starting to fill up; we could have been chasing our tails for much longer and the situation could have escalated into something far more stressful. For some reason, the company likes to be able to send and receive emails. Thank god for monitoring that works.

There are a couple of things to investigate at this point. First, had we not known that the Windows updates were what was filling the disk, a logical place to start looking for clues would be the log files on the suspect servers. I would like to take a little bit of time to quickly go over some steps for looking at logs in an Exchange environment. When thinking about potential disk space issues a few things come to mind: are log files growing rapidly? Did somebody turn on verbose logging and forget to turn it off? To verify the logs aren’t the issue, there are a few places that are good to look. If you are familiar with or have ever used message tracking in Exchange you know how powerful it can be, but its logs can also be part of a disk-filling problem. Here is the location where the message tracking logs are stored:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking

Another pair of locations that get used when you turn on verbose logging for troubleshooting send or receive connectors are the SmtpSend and SmtpReceive directories under ProtocolLog. These can fill up quite quickly if you forget to turn off verbose logging on a connector when you are done troubleshooting. The parent location is here:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\ProtocolLog
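
A quick way to see how much space each of these log locations is actually eating is to total them up with PowerShell. A rough sketch, assuming the default log root:

# Sum the size of each log folder under the default Transport log root
$logRoot = "C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs"

Get-ChildItem $logRoot | Where-Object { $_.PSIsContainer } | ForEach-Object {
    $bytes = (Get-ChildItem $_.FullName -Recurse -ErrorAction SilentlyContinue |
        Measure-Object -Property Length -Sum).Sum
    "{0,-20} {1,10:N0} MB" -f $_.Name, ($bytes / 1MB)
}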

I would like to point out quickly that the behavior of all of these logging mechanisms can be modified using the Exchange Management Shell; in fact, some of the more detailed settings can only be changed through the EMS.
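
For example, the message tracking log retention and size limits live on each transport server object and can be viewed or changed from the EMS. A sketch where the server name HUB01 is just a placeholder:

# Show the current message tracking log settings for a Hub Transport server
Get-TransportServer -Identity HUB01 | Format-List MessageTrackingLog*

# Cap the tracking log directory at 2 GB and keep 15 days of logs
Set-TransportServer -Identity HUB01 -MessageTrackingLogMaxDirectorySize "2GB" -MessageTrackingLogMaxAge 15.00:00:00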

If these quick spot checks don’t uncover any immediate problems, another good technique for gaining insight into where your disk space is going is to use a tool that enumerates file locations and file sizes. There are a few tools available; one that I like to use is Space Sniffer. It is fast, easy to use and gives a good visual representation of directory and file sizes. The tool can do much more, but in this case we are just interested in finding the disk issue quickly. We were able to quickly find that the size and contents of the %windir%\SoftwareDistribution\Download folder were growing rapidly. I happen to know that this is the temporary location that Windows uses to store Windows update files before they are installed.
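
If you don’t have Space Sniffer handy, a quick and dirty PowerShell pass over a suspect path can point you at the biggest offenders too. A rough sketch; point it at whatever drive or folder you suspect rather than scanning all of C:\ if you can.

# Find the 20 largest files under a suspect path
Get-ChildItem C:\ -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Sort-Object Length -Descending |
    Select-Object -First 20 FullName, @{n="SizeMB";e={[math]::Round($_.Length / 1MB, 1)}}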

There are a few things that can be done here. You can clear the temporary Windows update files, delete other unnecessary files, or grow your disks. We were lucky because our Hub Transport servers are VMs and increasing their disk size is simple. That seems like the best option if it is a possibility; if something like this happens again we will have the additional space so the Exchange servers won’t bog down.
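
If you do need to clear the temporary update files, the standard approach is to stop the Windows Update service, empty the download cache and start the service again. A sketch, run from an elevated prompt:

# Stop the Windows Update service so the files are not locked
Stop-Service wuauserv

# Remove the cached update installers (Windows re-downloads anything it still needs)
Remove-Item "$env:windir\SoftwareDistribution\Download\*" -Recurse -Force

Start-Service wuauserv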

Ultimately we prevented the disaster from occurring, but the incident is a great illustration of the lesson I’d like to share: make sure you have a good monitoring and alerting solution in place, otherwise you may not have any clue where to start looking. Because our Exchange environment is large and complex, tracking this problem down without a reliable monitoring tool would have been much more difficult. With good monitoring in place we were able to quickly identify the problem and resolve it before anything bad happened. On a side note, I am still thinking about how we can take this monitoring and alerting one step further to become proactive instead of reactive, but for now the tools are doing their job and because of them we avoided a potential disaster. If you have any thoughts on proactive monitoring and alerting for these types of disk issues let me know, I’d love to hear how you handle it.

Design Group Policy for easy troubleshooting

When I am looking up Group Policy answers on the Googles, I tend to see a lot of one-off fixes, aimed either at setting up policies that don’t exist yet or at repairing policies that are broken. I recently watched a great video over at the Channel 9 website by Darren Mar-Elia of GPOguy fame about using best practices and design principles for managing your Group Policy environment. Here is the link to that video.

That video really got me thinking about how I could improve my GP management skills in my day-to-day environment. So I decided to take as many ideas as I could from his talk, and from elsewhere in my searches across the interwebz, to help come up with some of my own best practices and guidelines for managing Group Policy.

The following is an overview of the ideas and techniques that I came up with and what has worked well in my experience with regard to managing Group Policy.

Group Policy organizational best practices:

  • Use a “U”, “S” or “C” prefix to denote whether the Group Policy applies to Users, Servers or Computers
  • Tack a version onto the end of each Group Policy name. Brand new Group Policies begin at v1.0
  • Every time a policy changes, increment the version number. It makes things easier to troubleshoot when using gpresult (see the quick example after this list)
  • Give each GPO one specific use case. DO NOT LUMP MULTIPLE FUNCTIONS INTO ONE POLICY
  • Use very detailed and descriptive names to denote what a GPO is and does
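
Here is the kind of gpresult check where the version-in-the-name convention pays off: the applied GPO names show up directly in the output, so you can see at a glance which revision a machine actually picked up. The report path below is just an example.

# Summary of applied policies for the computer - GPO names (and their versions) are listed
gpresult /r /scope:computer

# Or generate a full HTML report for deeper troubleshooting
gpresult /h C:\temp\gpreport.html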

Here are some example policies that I have been working on in a test environment; I think they capture many of the above best practices quite nicely. Please feel free to adapt this technique to suit your own specific needs. This is only a template and I’d like to see how it can be improved.

[Screenshot: example GPO names following this naming convention]

As you can see, using this format it is easy to tell whether a policy is a computer policy, what specifically the policy does and which version of the policy we’re currently at.
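
If you script GPO creation, the convention is easy to bake in from the start. A minimal sketch using the GroupPolicy module’s New-GPO cmdlet (the module ships with the GPMC/RSAT tools, and the policy name and comment below are made-up examples):

Import-Module GroupPolicy

# "C" = computer policy, descriptive middle, version at the end
New-GPO -Name "C - Disable USB Storage - v1.0" -Comment "Blocks removable storage on workstations"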

The most crucial part of using this system is getting other Group Policy admins to buy in to the technique. If you don’t clearly lay out your expectations, keeping policies up to date and organized could become a pain point down the road. The other caveat is to get the other GP admins in the habit of creating policies that address only one specific task, that are split into either user or computer policies, and that have descriptive names. If the environment currently relies on multi-purpose policies containing both user and computer settings then this may be a new concept for many of the admins, but the extra effort of setting this type of environment up will be worth the initial overhead.

I definitely think this technique can be improved and I am always tinkering with it to see how I can get it to work better, but for now it is at a good point. If you make the transition to organizing and improving your management of Group Policy, or already have some solid best practices of your own, let me know; I would love to hear what you are doing and how to incorporate more techniques into my own management style.

Gathering Exchange 2010 mail flow statistics

There are times when it can be useful to have a good grasp on the details of what kind of mail traffic is running through your Exchange environment. Recently I was tasked with coming up with some environmental statistics for our Exchange 2010 servers to help size a new project we are starting soon. There are a few different tools that help gather this information, and I’d like to briefly go over them today. Before I start I’d like to point out that most of this material I am borrowing from others; however, I think it is valuable to know how to do this type of thing. With that said, I’m definitely not trying to take credit for any of these techniques, just trying to show their benefits.

There are a few different tools that will help you get a handle on your Exchange environment. The first and quickest way to peer into it for some high-level overview statistics is to use PowerShell.

The following command can be used to grab some basic statistics such as the total mailbox size, the average mailbox size, and the maximum and minimum sizes in your environment.

Get-Mailbox -Database MBDB1 | Get-MailboxStatistics | %{$_.TotalItemSize.Value.ToMB()} | Measure-Object -sum -average -max -min

It is important to note, however, that this command can take some time to complete and can be an intensive process because there are so many calculations going on, so just be careful that you don’t crash anything. This command may not be viable if the environment is enormous, but if that is the case you probably don’t need to use any of these techniques anyway.

The next useful tool for gathering mail flow information is the Microsoft Log Parser tool, which can be downloaded here. Log Parser basically allows us to query the Exchange message tracking logs to pull out interesting information. I found a great blog post that describes the process of using Log Parser to query the message tracking logs to help determine daily send and receive traffic in your Exchange environment. You can find the blog post here, and I have it referenced at the end of this article as well.

There are a few tricks I would like to mention, however, because a couple of things in the blog post aren’t exactly obvious. After downloading and installing Log Parser you must run the command he has listed on his site using CMD; otherwise you will have to modify his commands to work in PowerShell.

For this command to work correctly you must also navigate to the location where the message tracking logs are stored. In the default install of Exchange they are stored in:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking

So after you navigate to the correct location you run the command:

"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT TO_LOCALTIME(TO_TIMESTAMP(EXTRACT_PREFIX(TO_STRING([#Fields: date-time]),0,'T'), 'yyyy-MM-dd')) AS Date, COUNT(*) AS Hits from *.log where (event-id='RECEIVE') GROUP BY Date ORDER BY Date ASC" -i:CSV -nSkipLines:4 -rtp:-1

This will output the total number of messages received (the RECEIVE events) for each date covered by the logs, which by default is the last 30 days, on that particular server; swap the event-id in the query to SEND to count outbound messages instead. Another important thing to keep in mind is that you need to run this command on each server that has either the Hub Transport or Edge Transport role installed, because each server houses its own set of log files.
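
If you would rather stay inside the Exchange Management Shell, Get-MessageTrackingLog can produce a similar count without Log Parser, though it is slower against large logs. A rough sketch for a single server and day; the server name HUB01 and the dates are just placeholders.

# Count the messages received by one server on a given day
Get-MessageTrackingLog -Server HUB01 -EventId RECEIVE -Start "09/01/2016 00:00" -End "09/02/2016 00:00" -ResultSize Unlimited |
    Measure-Object

# Swap -EventId RECEIVE for SEND to count outbound messages instead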

The last technique I’d like to go over for gathering interesting Exchange mail flow information is a script I found online, which can be found here. This is a very robust script that gathers a lot of specific information for a particular set of log files. Essentially this script functions similarly to the Log Parser approach above, except it grabs a lot more detail for a particular date.

This is easy to get working: just copy the script from the link into a .ps1 file and save it to a server that has the Exchange Management Shell installed. If the EMS is not installed, the script will not function correctly. The script will output some interesting details for each individual user, including things like:

  • Username
  • Messages sent/received
  • Total MB sent/received
  • Internal sent/received stats
  • Unique messages sent

It then outputs this information into a CSV file, so it is easy to manipulate the data at that point. This kind of information is very useful in helping to determine things like the average sent and received message size; I had not been able to provide that information to management easily until I found this script.
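
Once you have the CSV, slicing it further is just Import-Csv plus Measure-Object. A sketch only: the file name and the "Total MB Sent" column name here are hypothetical, so match them to whatever the script actually emits.

# Average and total megabytes sent per user, from the script's CSV output
Import-Csv .\email_stats.csv |
    ForEach-Object { [double]$_."Total MB Sent" } |
    Measure-Object -Sum -Average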

There are more techniques out there, I’m sure, and maybe even software that gathers these sorts of statistics for you, but for a quick and dirty way to grab some high-level numbers you can’t really beat these methods. They are fast and will get you information that, more often than not, is at least as detailed as what the people requesting it are looking for, which is a win-win for everybody. If you have any other input or questions about mail flow statistics feel free to let me know.

Resources:

http://exchangeserverpro.com/daily-email-traffic-message-tracking-log-parser/
http://exchangeserverpro.com/exchange-2010-message-tracking/
http://gallery.technet.microsoft.com/scriptcenter/bb94b422-eb9e-4c53-a454-f7da6ddfb5d6
