Monitor your Exchange disk sizes

A word to the wise: if you are suddenly unable to send or receive email in your Exchange environment, check whether the Exchange server disks are filling up.  Today I ran across an interesting issue (and by interesting I mean it could have caused a serious outage) where Windows updates were being downloaded, as usual, for our next patch management installation cycle but were unknowingly also causing our email services to stop functioning correctly.  I am thankful the scenario didn't get ugly, and the event gives me the opportunity to talk about a few things that I think might be useful for readers and other admins.

It turns out that this month's wave of Windows updates quietly filled up the disks on our Hub Transport servers during the day, unbeknownst to any of the admins.  Under normal circumstances the download process is by design and almost never becomes an issue, but in this case there was not enough disk space left for Exchange to work correctly.  (Hub Transport servers use a feature called back pressure that stops accepting new messages when free disk space runs low.)  This could have been disastrous had we not known that the disk was filling up; we could have been chasing our tails for a much longer period of time and the situation could have escalated into something far more stressful.  For some reason, the company likes to be able to send and receive email.  Thank god for monitoring that works.

There are a couple of things that need to be investigated at this point.  First, had we not known that the Windows updates were what was filling the disk, a logical place to start looking for clues would be the log files on the suspect servers.  I would like to take a little bit of time to quickly go over some steps for looking at logs in an Exchange environment.  When thinking about potential disk space issues, a few questions come to mind: are log files growing rapidly?  Did somebody turn on verbose logging and accidentally forget to turn it off?  To verify the logs aren't the issue, there are a few good places to look.  If you are familiar with or have ever used message tracking in Exchange you know how powerful it can be, but it can also be a reason your disk is filling up.  Here is the location where the message tracking logs are stored:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking

Another location that gets written to when you turn on verbose logging for troubleshooting send or receive connectors is the protocol log directory, which contains the SmtpSend and SmtpReceive subdirectories.  These can fill up quite quickly if you forget to turn verbose logging back off on a connector when you are done troubleshooting.  The default location is here:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\ProtocolLog

I would like to quickly point out that the behavior of all of these logging methods can be modified using the Exchange Management Shell, and some of the more detailed settings can only be changed through the EMS.
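
For example, here is a minimal EMS sketch of the kinds of spot checks and changes I am talking about.  The server and connector names below are placeholders, so substitute your own:

# Review the message tracking log settings on a Hub Transport server (hypothetical name HUB01)
Get-TransportServer HUB01 | Format-List MessageTrackingLogEnabled,MessageTrackingLogPath,MessageTrackingLogMaxAge,MessageTrackingLogMaxDirectorySize

# Cap the message tracking log directory so it cannot grow without bound
Set-TransportServer HUB01 -MessageTrackingLogMaxDirectorySize 1GB

# Turn verbose protocol logging back off on a connector once troubleshooting is finished
Set-ReceiveConnector "HUB01\Default HUB01" -ProtocolLoggingLevel None
Set-SendConnector "Outbound to Internet" -ProtocolLoggingLevel None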

If these quick spot checks don't uncover any immediate problems, another good technique for gaining insight into where your disk space is going is to use a tool that enumerates file locations and file sizes.  There are a few tools available; one I like to use is SpaceSniffer.  It is fast, easy to use and gives a good visual representation of directory and file sizes.  The tool can do much more, but in this case we are just interested in finding the disk issue quickly.  We were able to quickly see that the contents of the %windir%\SoftwareDistribution\Download folder were growing rapidly, and I happen to know that this is the temporary location Windows uses to store update files before they are installed.
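
If you don't have a tool like SpaceSniffer handy, a rough PowerShell equivalent can at least point you at the biggest offenders.  This is just a quick sketch; it walks the whole drive, so it can take a while on a large volume:

# Sum file sizes under each top-level folder on C: and show the ten largest
Get-ChildItem C:\ | Where-Object { $_.PSIsContainer } | ForEach-Object {
    $bytes = (Get-ChildItem $_.FullName -Recurse -ErrorAction SilentlyContinue |
        Where-Object { -not $_.PSIsContainer } |
        Measure-Object -Property Length -Sum).Sum
    New-Object PSObject -Property @{ Folder = $_.FullName; SizeGB = [math]::Round($bytes / 1GB, 2) }
} | Sort-Object SizeGB -Descending | Select-Object -First 10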

There are a few things that can be done here.  You can clear the temporary Windows update files, delete other unnecessary files or grow your disks.  We were lucky because our Hub Transport servers are VMs and increasing the disk size of these servers is simple.  That seemed like the best option since it was available to us; if something like this happens again we will have the additional space and the Exchange servers won't bog down.
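
If growing the disk isn't an option and you need the space back right away, clearing the Windows update download cache is the other quick route; the folder is just a staging area and Windows re-downloads whatever it still needs.  A minimal sketch:

# Stop the Windows Update service so the cached files aren't locked
Stop-Service wuauserv
# Clear the temporary update download cache (files are re-downloaded as needed)
Remove-Item "$env:windir\SoftwareDistribution\Download\*" -Recurse -Force
Start-Service wuauserv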

Ultimately we prevented the disaster from occurring, but the incident is a great illustration of the lesson I'd like to share: make sure you have a good monitoring and alerting solution in place, otherwise you may not have any clue where to start looking.  If we did not have a reliable monitoring tool in place it would have been much more difficult to track this problem down in the first place, because our Exchange environment is large and complex.  Because we have good monitoring we were able to quickly identify the problem and resolve it before anything bad happened.  On a side note, I am still thinking about how we can take monitoring and alerting one step further in the future and become proactive instead of reactive, but for now the tools are doing their job and because of that we avoided a potential disaster.  If you have any thoughts on proactive monitoring and alerting for these types of disk issues, let me know; I'd love to hear how you handle it.
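
As a starting point on the proactive side, even a scheduled task that runs something like the following sketch and emails an alert when free space drops below a threshold would flag this kind of problem early.  The threshold, server names and SMTP details are all assumptions to adjust for your environment:

# Check free space on each fixed disk of a few servers and alert when it drops below 15%
$servers = "HUB01","HUB02"   # hypothetical server names
foreach ($server in $servers) {
    Get-WmiObject Win32_LogicalDisk -ComputerName $server -Filter "DriveType=3" | ForEach-Object {
        $pctFree = [math]::Round(($_.FreeSpace / $_.Size) * 100, 1)
        if ($pctFree -lt 15) {
            Send-MailMessage -To "admins@example.com" -From "monitor@example.com" `
                -SmtpServer "smtp.example.com" `
                -Subject "Low disk space on $server $($_.DeviceID)" `
                -Body "$server $($_.DeviceID) is down to $pctFree% free space."
        }
    }
}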

Why Computer Science degrees translate to System Administration

I run across a lot of articles and posts arguing that a degree in Computer Science is largely irrelevant to system administration and that you are just as well off with another degree or no degree at all. I think that line of logic is very short sighted, and today I am going to (or at least attempt to) explain why. By no means am I criticizing those paths; I believe there is more than one way to skin a cat, and I have met many highly successful admins who reached their positions by these alternate means. To be clear, I am not advising readers that the CS route is the only right way to become a sysadmin. I am simply relating my own experiences in system administration to my background in CS and making the case that pursuing a degree in Computer Science, or any other engineering degree for that matter, isn't going to hurt your chances of becoming a sysadmin.

When you think of Computer Science you think of programming, or maybe math; at least I do. Most CS programs these days have a heavy orientation toward programming and the scientific and mathematical applications of programming to the world around us. As an aside, I am beginning to see many more programs tailored to specific disciplines within IT, which looks promising. This is a great hybrid approach in my opinion because it gives students a chance to look at a few alternate options. Coding isn't my passion, so having an option to become a system administrator without the intense coding load of a full CS program looks like an attractive approach.

It is true that many of the mundane daily tasks related to system administration don’t involve 8 hours a day of reading and writing code. Because of this I think it is important to characterize and distinguish a sysadmin as somebody who relies on software tools and programming to solve problems and technical challenges but doesn’t necessarily devote all of their time and energy to living in and interacting with code. The relationship of the sysadmin to programming is more of an indirect one, though still very important.

The farther along I wander in my journey as a sysadmin, the more I realize how much the CS background is helping me. I have a solid foundation in many of the core concepts taught in the CS program, which in turn has indirectly improved my abilities as a system administrator. The first and most valuable asset my CS background has given me is the ability to write and understand code. This is extremely useful in my daily slew of activities: it allows me to approach problems with a programmatic methodology, to automate redundant and repeatable tasks with scripts, to build intuition for why databases or programs are slow, to debug issues systematically, and so on. Obviously these skills can be learned elsewhere, but having them rolled into your education as part of the CS package deal is very convenient. I would much rather have this set of skills, and the ability to look at things from a different perspective, than have to learn each of these techniques separately. Somebody coming from a business or other similar background is far less likely to know about things like big O notation or how different algorithms work at a fundamental level; it simply isn't part of their background, so they don't spend time thinking about it.

This parlays well into other areas, and it sets you up with a diversified, broad horizon of future employment prospects. For example, take a pure sysadmin who knows no programming or CS; at their core they know system administration. But what if they get burnt out (which is common in this profession) or don't keep their skills up to match their position? There is nowhere in the industry for these individuals to turn, unless they want to go into management. That is why I believe individuals who choose not to learn how to program, or at least to understand how system administration and programming relate to each other, are essentially crippling their future prospects. With a diverse background, the CS sysadmin could potentially move into a DevOps role, a pure programming and development role, or a management role. In today's diverse IT ecosystem, programming and development skills are very much sought after, so the demand for these other types of positions and skill sets is high.

Another well known fact about the IT industry, which I don't necessarily agree with but which nonetheless exists, is that simply having a CS degree will open doors that may not otherwise be open without one. I personally believe that a degree shouldn't dictate your position, but by having a degree you set yourself up for some unique opportunities and you certainly are not hurting yourself. For example, all other things being equal, somebody scanning through resumes has to choose between an applicant with a degree in Computer Science and one with a degree in Philosophy. Which do you think will be picked? Like I said, I don't think this part of the hiring process is fair or even has much to do with skill, but a degree can be used as a way to get ahead of the competition, so it can be valuable on its own as a strategic component of the hiring process if nothing else.

Here's what I am saying. You don't have to have a degree in Computer Science to be a great System Administrator. But the CS background definitely equips you with the tools to understand some of the more abstract technical concepts and ideas, and it gives you a robust framework for working through and solving difficult, complex problems. Ultimately, being a good sysadmin (let alone anything else) comes down to a combination of many different things, including a willingness to learn and the amount of experience an individual possesses. There is no cookie cutter way to build the perfect sysadmin and you will invariably find a very diverse group of people in this profession, but a head start with a CS degree is one path that certainly won't hurt you and is a common attribute of many good sysadmins.

Reflections on the year

It is that time of year again, time to reflect on some of the things that happened during 2013.  As usual, it is impossible to predict what will happen in the future, what kinds of experiences will shape you, or what kinds of difficult challenges you will encounter and overcome.  Luckily, in 2013 there weren't any challenges that I wasn't able to overcome in one way or another.

There was a lot that happened in the past year that is worth going over.  The first thing I'd like to mention is that I hit my second full year of blogging, which was really exciting for me.  I have nearly 100 blog posts published to date and I really feel like I am just getting started.  I began to experiment a lot more with the format and content of the blog and I have found that enjoyable.  I have also begun experimenting with different techniques to monetize the blog, which has been interesting as well.  I think it will be really fun to see what happens with all of the different ways the blog is growing in the coming year.  One thing I would like to see more of is unique perspectives from other sysadmin/IT bloggers, because I feel like that will really spark some other areas of growth.

Other high notes of the year include my first trip to Cisco Live!, which was a great experience; I learned so much at that conference and it wound up being a great trip.  I have taken on more responsibilities in my current position.  I have also begun implementing some fun and interesting projects, including a fully featured testing environment with load balancers, a SAN, clustered Hyper-V, SQL and so on.  That has been a great tool, not only for my own learning of the technologies but for the organization as well, helping us prototype and test potential technologies.  This past year has also been valuable from a networking standpoint: I took part in a full blown wireless upgrade project, I helped with the management of and plan for moving forward with our current switches, and in general I learned a ton about networking technologies that I did not expect to learn, which has been valuable and fun for me.

While things went well for the most part, there is always room for improvement.  One area of improvement for next year is more involvement in automation.  I am really getting a good taste of automation now and I think it will be huge for my career growth as well as a benefit to my current employer.  I would also like to get involved in more (people) networking, whether through conferences or other user group gatherings.  Networking with other IT pros is something I need to continue to work on.

Finally, outside of work I have some other things I'm working on getting off the ground that I'd like to mention.  First, and most exciting for me, is my side business: I repair mobile devices such as iPads, iPhones and Android phones.  The learning experience from that project has been great so far and I would really like to expand some of the things I'm doing with it in the next year.  Part of getting it going will be learning how to develop Android and iOS apps, building a repair tracking system, and learning many more of the nuances of running a business that I had no idea about before I started this project.  Last but not least, I met my wonderful girlfriend.  She has been a true blessing to me so far and I just wanted to give her a shout out while I am writing this up.  So to bring things together here, I am really looking forward to all of the rewards and opportunities that go along with hard work and persistence.

There will be more of the same this coming year and I am excited for it.  From career goals to personal projects, I would like to see myself continuing to learn, continuing to improve processes and continuing to become a person who can take on responsibilities and whom people can depend on to get things done.  I know it will be hard work and won't always be fun but I know it will be worth it.  Next year should be fun, so until then, have a happy new year!

Design Group Policy for easy troubleshooting

When I am looking up Group Policy answers on the Googles, I tend to see a lot of one-off fixes, either for policies that don't yet exist or for policies that are broken.  I recently watched a great video over at the Channel 9 website by Darren Mar-Elia of GPOguy fame about best practices and design principles for managing your Group Policy environment.  Here is the link to that video.

That video really got me thinking about how I could improve my GP management skills in my day to day environment.  So I decided to take as many ideas as I could from his talk, and from elsewhere in my searches across the interwebz, to come up with some of my own best practices and guidelines for managing Group Policy.

The following is an overview of the ideas and techniques I came up with and what has worked well in my experience managing Group Policy.

Group Policy organizational best practices:

  • Use a "U", "S" or "C" prefix to denote whether a Group Policy targets Users, Servers or Computers
  • Tack a version number onto the end of each Group Policy name; brand new policies begin at v1.0
  • Increment the version number every time a policy changes, which makes things much easier to troubleshoot when you are looking at gpresult output
  • Give each GPO one specific use case.  DO NOT LUMP MULTIPLE FUNCTIONS INTO ONE POLICY
  • Use detailed, descriptive names that make it obvious what a GPO is and does

Here are some example policies that I have been working on in a test environment, shown in the screenshot below; I think they capture many of the above best practices quite nicely.  Please feel free to adapt this technique to suit your own specific needs.  This is only a template and I'd like to see how it can be improved.

Group Policy best practices

Using this format (a name along the lines of "C - Office 2010 Settings - v1.2", to make up an example), it is easy to tell at a glance whether a policy applies to computers or users, what specifically the policy is doing and which version of the policy we're at currently.

The most crucial part of using this system is getting the other Group Policy admins to buy in to the technique.  If you don't clearly lay out your expectations, keeping policies up to date and organized can become a pain point down the road.  The other caveat is getting the other GP admins into the habit of creating policies that address only one specific task, that are split into either user or computer policies, and that have descriptive names.  If the environment currently relies on multi-purpose policies containing both user and computer settings, this may be a new concept for many of the admins, but the extra effort of setting this type of environment up will be well worth the initial overhead.
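
One way to keep everybody honest once they have bought in is to periodically audit the naming convention.  Here is a rough sketch using the GroupPolicy module (it ships with the GPMC on Server 2008 R2 and later); the regular expression encodes the prefix/version format described above and is just my assumption about how you write your names:

# List any GPOs whose display name doesn't follow the "U/S/C - description - vX.Y" convention
Import-Module GroupPolicy
Get-GPO -All |
    Where-Object { $_.DisplayName -notmatch '^(U|S|C)\s*-\s*.+\s*-\s*v\d+(\.\d+)?$' } |
    Select-Object DisplayName, ModificationTime |
    Sort-Object DisplayName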

I definitely think this technique can be improved and I am always tinkering with it to see how I can make it work better, but for now it is at a good point.  If you make the transition to organizing and improving your management of Group Policy, or if you already have some solid best practices of your own, let me know; I would love to hear what you are doing and how I can incorporate more techniques into my own management style.

Gathering Exchange 2010 mail flow statistics

There are times when it can be useful to have a good grasp on the details of what kind of mail traffic is running through your Exchange environment.  Recently I was tasked with coming up with some environmental statistics for our Exchange 2010 servers to help size a new project we are starting soon.  There are a few different tools that help gather this information, and I'd like to briefly go over them today.  Before I start I'd like to point out that most of this material is borrowed from others; I think it is valuable to know how to do this type of thing, but I'm definitely not trying to take credit for any of these techniques, just trying to show their benefits.

There are a few different tools that will help you get a handle on your Exchange environment.  The first and quickest way to peer into your Exchange environment for some high level overview statistics is to use PowerShell.

The following command grabs some basic statistics for a single mailbox database (MBDB1 in this example), including the total mailbox size, the average mailbox size, and the maximum and minimum mailbox sizes, all in MB:

Get-Mailbox -Database MBDB1 | Get-MailboxStatistics | %{$_.TotalItemSize.Value.ToMB()} | Measure-Object -sum -average -max -min

It is important to note, however, that this command can take some time to complete and can be an intensive process because there are so many calculations going on, so just be careful that you don't bog anything down.  It may not be viable if the environment is enormous, but if that is the case you probably don't need to use any of these techniques anyway.
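
If you want the same numbers for every database rather than just one, a small wrapper loop along these lines should do it.  This is only a sketch and it will be even slower than the single-database version:

# Summarize mailbox sizes (in MB) for each mailbox database
Get-MailboxDatabase | ForEach-Object {
    $stats = Get-Mailbox -Database $_.Name -ResultSize Unlimited |
        Get-MailboxStatistics |
        ForEach-Object { $_.TotalItemSize.Value.ToMB() } |
        Measure-Object -Sum -Average -Maximum -Minimum
    New-Object PSObject -Property @{
        Database  = $_.Name
        TotalMB   = $stats.Sum
        AverageMB = [math]::Round($stats.Average, 2)
        MaxMB     = $stats.Maximum
        MinMB     = $stats.Minimum
    }
}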

The next useful tool for gathering mail flow information is the Microsoft Log Parser, which can be downloaded here.  Log Parser basically allows us to query the Exchange message tracking logs to pull out interesting information.  I found a great blog post that describes the process of using Log Parser to query the message tracking logs to help determine daily send and receive traffic in your Exchange environment.  You can find the blog post here, and I have it referenced at the end of this article as well.

There are a few tricks, however, that I would like to mention because a couple of things in the blog post aren't exactly obvious.  After downloading and installing Log Parser you must run the command he has listed on his site from CMD; otherwise you will have to modify it slightly to run from PowerShell.
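
For what it's worth, here is roughly what that looks like from PowerShell using the call operator.  The install path is the default Log Parser 2.2 location and may differ on your system, the query is the same one shown further down, and you still run it from the message tracking log directory described next:

# Rough PowerShell equivalent of the CMD invocation shown below
$logparser = "C:\Program Files (x86)\Log Parser 2.2\logparser.exe"
$query = "SELECT TO_LOCALTIME(TO_TIMESTAMP(EXTRACT_PREFIX(TO_STRING([#Fields: date-time]),0,'T'), 'yyyy-MM-dd')) AS Date, COUNT(*) AS Hits FROM *.log WHERE (event-id='RECEIVE') GROUP BY Date ORDER BY Date ASC"
& $logparser $query -i:CSV -nSkipLines:4 -rtp:-1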

For the command to work correctly you must also navigate to the location where the message tracking logs are stored.  In a default install of Exchange they are stored in:

C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking

So after you navigate to the correct location you run the command:

"C:\Program Files (x86)\Log Parser 2.2\logparser.exe" "SELECT TO_LOCALTIME(TO_TIMESTAMP(EXTRACT_PREFIX(TO_STRING([#Fields: date-time]),0,'T'), 'yyyy-MM-dd')) AS Date, COUNT(*) AS Hits from *.log where (event-id='RECEIVE') GROUP BY Date ORDER BY Date ASC" -i:CSV -nSkipLines:4 -rtp:-1

This will output the number of messages received each day, for as far back as the message tracking logs on that particular server go (30 days by default); change event-id='RECEIVE' to event-id='SEND' in the query to count sent messages instead.  Another important thing to keep in mind is that you need to run the command on each server that has the Hub Transport or Edge Transport role installed, because each server houses its own unique set of log files.

The last technique I'd like to go over for gathering interesting Exchange mail flow information is a script I found online, which can be found here.  It is a very robust script that gathers a lot of specific information from a particular set of log files.  Essentially it functions similarly to the Log Parser approach above, except that it grabs a lot more detail for a particular date.

This is easy to get working: just copy the script from the link into a .ps1 file and save it to a server that has the Exchange Management Shell installed, because the script will not function correctly without the EMS.  The script outputs some interesting details for each individual user, including things like:

  • Username
  • Messages sent/received
  • Total MB sent/received
  • Internal sent/received stats
  • Unique messages sent

It writes this information out to a CSV file, so it is easy to manipulate the data from that point.  This kind of output is very useful for determining things like the average sent and received message size; I had not been able to provide that information to management easily until I found this script.
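
Once you have the CSV, PowerShell makes it easy to roll the per-user rows up into environment-wide numbers.  The file name and column names below are made up for illustration, so match them to whatever headers your CSV actually contains:

# Roll per-user rows up into totals and averages (path and column names are hypothetical)
$rows = Import-Csv .\MailFlowStats.csv
$sent = $rows | ForEach-Object { [int]$_.'Messages Sent' } | Measure-Object -Sum -Average
$recv = $rows | ForEach-Object { [int]$_.'Messages Received' } | Measure-Object -Sum -Average
"Total sent: {0} (average {1:N1} per mailbox)" -f $sent.Sum, $sent.Average
"Total received: {0} (average {1:N1} per mailbox)" -f $recv.Sum, $recv.Average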

There are more techniques out there, I'm sure, and maybe even software that gathers these sorts of statistics, but for a quick and dirty way to grab some high level statistics you can't really beat these methods.  They are quick and will get you the information you need, which more often than not is at least as detailed as what the people requesting the information are looking for, so it's a win-win for everybody.  If you have any other input or questions about mail flow statistics, feel free to let me know.

Resources:

http://exchangeserverpro.com/daily-email-traffic-message-tracking-log-parser/
http://exchangeserverpro.com/exchange-2010-message-tracking/
http://gallery.technet.microsoft.com/scriptcenter/bb94b422-eb9e-4c53-a454-f7da6ddfb5d6
