If you are having trouble installing previous versions of the .NET Framework on Windows Server 2012 and above, or are receiving the error shown below, use this set of instructions.
Find and mount the original Windows Server 2012 ISO, and make note of the drive letter that the image gets mounted as.
Once the ISO is mounted and we have identified the drive letter, we can enable the .NET Framework. This can be done from the command line using dism.exe or, alternatively, through the Windows GUI.
First we need to enable the .NET features on the server. Notice that in the command I am using the F: drive.
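A typical dism invocation for this, assuming the ISO is mounted as F:, looks something like:

dism /online /enable-feature /featurename:NetFx3 /all /limitaccess /source:F:\sources\sxs

The /source switch points dism at the sources\sxs folder on the mounted media, and /limitaccess stops it from trying to reach Windows Update for the payload.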
Whether you are new to system administration or have spent some time in the role, getting a grip on documentation can be tricky and oftentimes challenging. I can speak from experience because when I started my career I had no idea how to do this stuff, but slowly over time I have been able to develop some useful techniques. It also helps that I am obsessed with documentation, so the techniques I use are always changing and I am always looking for ways to improve my documentation skills. The most important takeaway from what I have learned is that there is no end-all-be-all way to do documentation and everybody will probably find slightly different ways. My method may or may not work for you, but hopefully you can look at it and use some of the underlying concepts to shape your own style. I have used my own documentation procedures over the past 3 years to really improve my job performance, so hopefully there is something in here you can apply to your own situation.
Once you get past the initial discomfort of documenting everything and shift your mindset to how useful it is to have a living reference for everything on the network, documentation becomes much more manageable. The added bonus of solid documentation is that once you have a decent repository built up, you don't have to worry about constantly adding new items. I am at a point now where I primarily use my documentation as a reference and don't find myself having to create or update things nearly as often as I used to. The other main benefit of solid documentation is helping others on your team, especially junior members, so they can go learn about the intricacies or specific details of systems in your environment. It is also good for avoiding the "hit by a bus" scenario. The general idea is that with a great piece of documentation, somebody can pick up where you left off with minimal downtime and impact. If you were to get hit by a bus or otherwise become incapacitated, somebody else could get up to speed with the way things work (what, when, why, how) using your documentation as a reference, without spending hours and hours figuring out exactly what they needed to know.
Documentation Overview
There are really two main types of documentation, internal and external. Public-facing (external) documentation is an entirely separate topic and I will not be discussing it here. Network and technical documentation are the two big internal types I typically reference. Network documentation is essentially working with Visio to produce anything and everything related to the network. If you want to be a good system/network administrator, getting good at working with Visio is a great way to help. The other type, which I like to refer to as "technical documentation," is typically anything and everything that I have found to be helpful in solving a problem. As I mentioned earlier, the longer you work on your environment and build your documentation, the less new documentation you will have to produce. This information comes from the documentation section in Limoncelli's book Time Management for System Administrators, which is a fantastic book for sysadmins if you haven't already taken a look.
My own technical documentation has grown organically over the past 3 years, from just a few pieces of information to essentially a living library of all the different systems I and others on my team work with on a regular basis. Everything I do is in there: technical procedures, links to external blogs and websites, PDFs, and pretty much anything else you can imagine that would be useful. One of the hardest things about working on documentation in a collaborative environment is getting people to buy in to the system. The best way I have found to handle this is to take the initiative, start the process, then let everyone know about it, what tools you are using, and how awesome this documentation is, and hope that it sticks. I am not a fan of forcing people to do things, but by showing them an amazingly simple system that is also incredibly useful, the goal is to get them excited and motivated to use your documentation system. Sometimes it works, sometimes not, but I have found it to be the best approach when attempting to change people's habits and get them used to working with something new.
In the remainder of this post I will go into some of the details of how I organize my technical documentation and use it to quickly identify and fix problems. I don't plan on covering network documentation in this post as it is a slightly different beast.
The Tool
I have used pretty much every documentation system you can think of: SharePoint, Evernote, Dropbox, various wikis, Office documents, etc. Through my time and experience with all of these tools I have found one to be the best for sysadmin documentation. I really believe that many of these tools serve a certain purpose or function, but in my time as a sysadmin I have found OneNote to be the hands-down best tool for system administration documentation. As a side note, wikis are a great alternative if you don't have access to OneNote, but my preference lies with OneNote.
OneNote is flexible: you can set up a location on a file share inside your company and grant access to certain AD users or groups to control who can see, write to, and collaborate on your documentation system. It is very handy to be able to work on some notes while in a meeting on my laptop, then move back to my desktop and have those notes current and in sync between my machines. It is also possible for multiple people to work live on different notes in OneNote at the same time, which is nice when you are working in a busy team environment. I know there are many other products that allow this, so even if you don't choose OneNote I would highly recommend a tool that offers these features.
Organization
I have OneNote grouped in a specific way to help increase my productivity, with different notebooks for various items. I have a notebook for my technical documentation, one for a task log, one for projects and meetings (which gets merged into my tech docs if the project or meeting ever materializes into anything more than just a few notes), and a notebook for personal work items. The personal notebook contains a few different notes for personal accomplishments, a personal agenda, and ideas for projects and improvement.
Within the tech docs notebook I have notes for all of the major technologies that I or members of my team work with on a regular basis. Basically, anything that we deem noteworthy gets thrown into one of the categories somewhere inside the tech docs. The structure of this document grows organically for me, with a few minor pain points here and there along the way. For example, when a new solution to a problem is found and there isn't a specific category for the fix, it is sometimes necessary to either create a new category or just use your judgement when determining the best place to store the fix. Luckily the search feature in OneNote works well, so if you can't remember where you put something you can just search through everything to find it. Conveniently, the larger your docs grow, the more useful the search function becomes.
Each category follows a basic structure guideline that I have found improves the workflow and allows things to work more smoothly. The structure applies to every page I can use it on and acts as a template that includes the most common and hardest-to-find information. The items in each category each get their own note and are as follows:
Network Information – IPs, DNS names, server and network roles, and anything else that plays a part in the network goes here.
Support/Contact – Direct software and product support numbers, vendor contacts, important email addresses, account IDs and service agreements, licenses, contract numbers, etc.
Resources – Any links to relevant and important technical docs, deployment guides, implementation guides, or admin docs, whether internal or online.
Commands – I work in a Windows environment, so PowerShell is often the go-to resource. I keep a table of all the commands that I deem useful for that particular category here.
Useful tools – This one is optional but for some categories it is a nice little reference. I use this section to identify anything and everything that can be used as a tool to help with a particular category.
This information is the skeleton that I model all of the other documentation around. From there, the rest is easy and should essentially fall into place. Most of the time these other items are one-off fixes for problems that I have found or solved. Sometimes a quick link to the blog post that helped is a good enough reference, and other times a painfully detailed set of steps is necessary to document the procedure for the fix.
Style
The actual documentation is more of an art form than anything else, and it gets better with practice. Sometimes technical notes require an obscene amount of detail because of how complex they are and because the procedure only takes place once or twice a year. Other notes just need a rough explanation and hardly any detail at all; it all depends on the situation. As has been pointed out in the comments section, one good approach is sometimes to assume the reader has no prior knowledge about the topic, in which case painstaking explanation and detail are necessary to convey your material. The point is, there is no cookie-cutter way to do all of your technical documentation, and various tactics and techniques need to be developed for different types of issues; that is why documentation becomes an art form in my view.
Going back to my own technique, one of the most helpful tricks I have found is the commands section, where I throw all of the most useful and common commands for the particular category I am referencing. The more work you do from the command line, the more useful this note will be to you. For example, being in a Windows environment, I work with Exchange pretty much daily, and having a repository of all the useful commands I need in one place is one of the best ways to save time rather than going through Google for the same specific information every time. Here is a glimpse of what I am talking about.
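A few illustrative Exchange Management Shell entries, in the command – description format such a table might use (these are common cmdlets given as examples, not a copy of the actual table; the identities and addresses are placeholders):

Get-Mailbox -ResultSize Unlimited – list every mailbox in the organization
Get-MailboxStatistics -Identity username – show mailbox size and item count for a user
Get-MessageTrackingLog -Sender user@domain.com – trace messages sent by a particular address
Get-Queue – check the current mail queues on a transport server
Test-ServiceHealth – verify that all required Exchange services are running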
Another thing that will help your documentation immensely is consistency. I am talking broadly here, but when you create a living document you will want some sort of consistent style across your documentation. Items that come to mind are things like a consistent template as mentioned earlier, naming conventions, fonts, capitalization, organization, and established criteria for creating topics and notes. Having this general style guide established early on will make finding and reading information in your documents much easier and will consequently save time when you are looking for something specific.
Closing
That's all I've got for now. I just wanted to point out a few things and get people who are looking for ideas on how to get going with technical documentation pointed in the right direction. As I mentioned, there's more than one way to skin a cat – there are a large number of ways to do documentation and there isn't necessarily one best way. I have shown you my preferred way, but it may not be the best way for you.
Documentation is as much of a learning process as anything else, and sometimes you just need to experiment and spend some time wading around to get the best results. One great way to check whether your documentation is working is to get somebody from your team to use it to fix something you have instructions for. If you just want to get some practice writing clear and concise instructions, try this method on yourself: write out the doc and procedure, come back to the instructions a week or month later, and see how hard it is to figure out how to fix the problem. Chances are, if things are unclear or your teammate is unable to fix the issue, it is probably a good idea to reevaluate how you're doing your documentation.
If you have any thoughts, suggestions, ideas or anything else, just let me know. Like I said earlier, I am obsessed with this kind of stuff and I am always looking for ways to improve my own processes and procedures. I plan on coming back to this post and updating it in the future if any of my documentation practices change, but for the time being, I hope you find this information useful and applicable to your own documentation processes.
From time to time, a ticket will be created regarding system patches failing in an SCCM environment. To fix this, there are really only two major steps:
Rename the C:\Windows\SoftwareDistribution folder to SoftwareDistribution.old (stop Windows Update service before renaming, then restart the service).
Rename C:\Windows\System32\catroot2 to catroot2.old (stop the Cryptography service before renaming, then restart the service).
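If you prefer to script it, a minimal PowerShell sketch of those two steps might look like the following (assuming the default service names wuauserv for Windows Update and CryptSvc for Cryptographic Services):

# Windows Update service
Stop-Service wuauserv
Rename-Item C:\Windows\SoftwareDistribution SoftwareDistribution.old
Start-Service wuauserv
# Cryptographic Services
Stop-Service CryptSvc
Rename-Item C:\Windows\System32\catroot2 catroot2.old
Start-Service CryptSvc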
After this is done, run these actions from the Configuration Manager client:
Discovery Data Collection Cycle
Software Updates Deployment Evaluation Cycle
Software Updates Scan Cycle
The procedure above has taken care of the issue pretty reliably. If the updates still don’t install properly, you may have to download the specific updates and install them manually.
Based on a strange network problem recently, I decided to put together some quick notes and a few tips on ways to improve your Wireshark experience, drawn from my own time with the tool. There are many, many more features that Wireshark has to offer; these just happen to be the most apparent ones I have found so far. Wireshark is extremely powerful, and therefore extremely useful if used properly. At first it takes a while to get used to everything Wireshark has to offer, but once you start to get the hang of how things work, it can be a great network troubleshooting tool. Basic knowledge of networking concepts is assumed, as well as some familiarity with Wireshark, for anyone attempting to debug network problems with this tool.
Here is a list of some of the most common and handy features you can utilize in Wireshark. I am not going to dive into great detail on most of these items because I honestly don't have a ton of experience with all of them; I basically just wanted to point out the highlights.
Filtering in Wireshark is very handy.
Create custom profiles for different use cases (quickly select from bottom right hand corner).
Color filters are useful! (Right click a field in the packet trace and select a colorize rule.) The bottom left bar will tell you what field you are looking at, which makes things easier when customizing.
Use regex in Wireshark display filters with the "matches" operator.
You can extract specific information from trace files on the command line using tshark (see the example after this list).
Right click a packet and select “follow TCP/UDP stream” to debug a single network conversation.
Low delta times are good. If you see high delta times you should probably investigate further.
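As a quick example of the tshark tip above, something like this pulls the source address and requested URI out of every HTTP request in a capture (capture.pcap and the chosen fields are just placeholders):

tshark -r capture.pcap -Y "http.request" -T fields -e ip.src -e http.request.uri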
Here are some more concrete examples and a few basics of how to put these tips into practice. In Wireshark you can use either English-style or C-style operators when filtering to help narrow down traffic and interesting network patterns and issues. For example, "==" and "eq" behave in the same manner when applying filters. Other operators include <, >, !=, <=, >=, etc., just like you would see in a typical programming language.
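To make that concrete, these two display filters are equivalent ways of showing only traffic on TCP port 80 (the port number is arbitrary):

tcp.port == 80
tcp.port eq 80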
Use custom configuration profiles. If you look at packet traces often, this will save you a tremendous amount of time when you are looking at specific types of traffic or are only interested in certain traffic patterns. For example, say you spend a lot of time looking at traces that fit the same criteria; by using custom profiles you can quickly adjust and modify the view in Wireshark to help identify patterns and potential issues by cycling through different, specific views. To begin creating custom profiles, go to Edit -> Configuration Profiles and then either select an existing custom profile or create a new one to begin changing.
One handy trick is to disable the TCP offload checks. If your packet captures are getting clogged up with a bunch of red and black offload errors, this is the first place you should look. There are a few places where this option can be enabled or disabled. The easiest way to check these options is under Edit -> Preferences, under the Protocols tree for the UDP, TCP and IPv4 protocols. The example below shows what the options should look like for the TCP protocol. The TCP and UDP offload checks are disabled by default, but the IPv4 one needs to be unchecked manually. The specific option under the IPv4 protocol is labeled "Validate the IPv4 checksum if possible"; simply uncheck this and the red and black errors should disappear.
There is a capture option that allows you to resolve IP addresses to hostnames, which I find can be very useful. To enable this option, open the capture options screen; there should be an option under name resolution called "Resolve network-layer names". Simply check that box and you should have name resolution.
As mentioned in the bullets above, the “follow UDP/TCP stream” option can be extremely useful and is a very quick way to glean information. It is so useful because it is so easy to use. Simply find a traffic conversation you would like to debug and right click the packet number in the top Wireshark pane and choose the follow UDP/TCP stream option and you can get an idea of everything that happened during a particular conversation. For example, using this technique you can follow FTP transactions.
Viewing a breakdown of the packet flow and traffic patterns can be a useful tool as well when diagnosing various network issues. There is an option in Wireshark that shows in good detail the breakdown of the various packets and protocols, which can be used to troubleshoot the network. This option is called Protocol Hierarchy Statistics and can be found under the Statistics -> Protocol Hierarchy Statistics page.
Only look at traffic for one IP address:
ip.addr==192.168.103.104
Likewise, filter out all traffic from an IP address:
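Using the same example address, negate the whole expression (note that ip.addr != x does not behave the way you might expect in Wireshark, so wrapping the comparison in a negation is the reliable approach):

!(ip.addr == 192.168.103.104)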
I am writing this post with the intention of giving readers (and other sysadmins) who are unfamiliar with Git a brief introduction to get up and running as quickly as possible. There are numerous guides out there, but none I have found so far with the spin of getting things working quickly for system administrators. I recently went through the process of submitting some of my work to an open source project, and since I went through all of the steps myself, I thought I would write up a little guide to getting started in case anyone is interested. Because of this experience, I thought the process could be streamlined if I presented a simple set of steps and examples for other sysadmins so that they can get started and get comfortable using Git as quickly as possible. By no means am I claiming to be a Git expert; I simply know enough to "do stuff," so take this guide for what it is, a simple introduction. Hopefully it is enough to get the ball rolling and get you hooked on revision control!
Step 1: If you haven't already, head over to GitHub and set yourself up with a free account. The steps are trivial, just like setting up any other account. You will also need to install Git locally on your machine: sudo apt-get install git on Ubuntu, or with a little more legwork if you are using Windows.
Step 2: Once you are all set up, you can either create a new repository or fork somebody else's repo, which will essentially make a copy of their work under your user account on GitHub. Since I was contributing to a public project, I forked their repo. This is typically (for beginners at least) done through the website by browsing to the project you are interested in and then clicking the fork button at the top right.
Once you have copied or forked the project you want, you can begin making your own changes and updates. This is where much of the initial confusion came in for me. I should make a note here: there are a few things that must be done through the website and others that should be done with the git tool you downloaded and installed earlier.
Step 3: Clone the repo down to your machine locally to begin making changes. This is done via the command line with the following:
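(The exact URL depends on your fork; the username and repository name below are just placeholders.)

git clone https://github.com/your-username/the-repo.git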
Once you have the repo cloned down to your machine, we will create a branch to work on the specific piece of the project we are changing. First, though, there are a few more common commands used for working with repos. In some scenarios the public project may change or get updated while you are working on it, so you might want to pull the most recent changes into your own fork. The following commands are used to work with public repos and forks.
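One note: the fetch and merge commands below assume the original project has been registered as a remote named upstream. If it hasn't been, you would add it first with something like this (the URL is a placeholder for the original project):

git remote add upstream https://github.com/original-owner/the-repo.git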
Fetch new changes from original repo:
git fetch upstream
Merge the fetched changes into your working files:
git merge upstream/master
Pull the newest updates and changes from a fork or repo:
git pull
Okay, let’s assume that our repository is all up to date and we are ready to start working on the new changes that we want to make. First, we need to create a local branch to make the changes on. The command for creating a branch is:
git checkout -b $topic
git checkout -b my_topic
Where $topic is the specific change you will be working on. Then go ahead and make your changes and/or updates with your favorite text editor. If you have multiple changes to make it may be a good idea to create a few different branches using the command above. To switch between branches use the following:
git checkout $branchname
git checkout my_topic
If you are working on a branch and either don’t want to commit the changes or the branch becomes obsolete, you can delete the branch with either of the following commands:
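These would be the two standard branch-delete forms: -d refuses to delete a branch that has not been merged yet, while -D forces the deletion (my_topic is the example branch name from above):

git branch -d my_topic
git branch -D my_topic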
Step 4: Assuming you are all done with your branches and are finished making your changes, the next step is to commit the changes. This will basically record a snapshot of the state of the files. Typically you will want to commit one thing at a time to keep track of things more easily, so you will make one change, commit it, and then repeat if there are numerous changes that need to be made. The command is:
git commit -am "update goes in here"
Now that we have created a branch, made our changes, and committed them so that Git knows about them and can keep track of them, we are ready to merge the code that we changed locally back up to GitHub. Start by pushing the branch up to the GitHub repo.
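Assuming your fork is the default origin remote and the branch is the my_topic example from earlier, that push looks like:

git push origin my_topic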
Next, merge your changes. This is done by changing to the "master" branch and then issuing a merge command for the branch your changes were made on. So here is what this process would look like:
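A rough sketch of that sequence, again using the my_topic example branch and assuming your fork is the origin remote:

git checkout master
git merge my_topic
git push origin master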
Alternatively, if you just want to quickly push out a change to the master branch, you can issue a git commit locally and then a git push to update the master branch without merging the items first.
git commit -am "updated message"
git push
I like to use this for local repositories where I just need to get things up quickly.
Step 5: At this point everything should be accurate on your own personal GitHub page. You can double check your commit history or look at the last update time to make sure your changes worked their way into your public fork or repo. If you are contributing to a public project, the final step is to issue a pull request on the GitHub site. Once this is done, wait to hear back from a maintainer of the project, either asking you to fix something before they accept your changes or letting you know that your changes were good and have been merged into the project.
Git, to me, was a little bit confusing at first. Like I said earlier, I still only have a basic understanding, but I think it is important for system administrators to know how to commit code to projects and how to keep track of their own code and projects as well. Even if the code is only small scripting projects, it is still beneficial to publish it since GitHub is a public site. Potential colleagues and employers can browse through your work, which increases your visibility and can even lead to job offers.
If you’re interested in learning more, I found this site to be a great resource.