NMTopProtocols Expert Released

Michael A. Hawker is the Program Manager for Network Monitor. His focus has been on the API, UI, and Experts as they have been developed through versions 3.3 and 3.4.

You’ve seen a lot of updates on Experts lately with the move to the Outercurve Foundation, but we have a new expert for you too! It’s the sibling of Top Users: Top Protocols. You can download it over on CodePlex.

What is Top Protocols?

Top Protocols is another simple expert designed to give you a high-level summary of what’s occurring in a trace. While Top Users shows you the chattiest machines on a network, Top Protocols shows you the chattiest protocols.

Once installed, it works like any other Expert in the Experts menu. Just find NMTopProtocols in the list and select Run Expert. The expert will automatically start parsing the data, as indicated in the status bar:

[Screenshot: parsing progress shown in the status bar]

What can it do?

Top Protocols walks through your trace and builds a count of the highest-level protocols it encounters. For each protocol it records how many times it appears, the protocol stack it was carried in, and the number of bytes in each frame. It can also reassemble the data as it goes, so fragments are counted toward the initiating protocol instead of as raw transport data.
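To make the idea concrete, here’s a minimal sketch (in Python, and not the expert’s actual code) of the kind of tally it builds. The frame list and stack-path strings below are made up for illustration; Top Protocols gets the real values from Network Monitor’s parsers.

    from collections import defaultdict

    # Hypothetical parsed frames: (protocol stack path, frame size in bytes).
    frames = [
        ("Ethernet/IPv4/TCP/HTTP", 1514),
        ("Ethernet/IPv4/TCP/HTTP", 612),
        ("Ethernet/IPv4/UDP/DNS", 87),
        ("Ethernet/IPv4/TCP", 60),
    ]

    frame_counts = defaultdict(int)  # how many frames ended with this stack
    byte_counts = defaultdict(int)   # how many bytes those frames carried

    for stack, size in frames:
        frame_counts[stack] += 1
        byte_counts[stack] += size

    for stack in sorted(frame_counts, key=frame_counts.get, reverse=True):
        print(f"{stack}: {frame_counts[stack]} frames, {byte_counts[stack]} bytes")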

One important note here is that Top Protocols only parses as far as the Parsing Profile selected in Network Monitor allows. So, if you want the full breadth of parsers, make sure you’re on the Windows profile before running the expert. If you just want a quick summary, switch to a lighter profile and Top Protocols will load a little more quickly.

After it’s collected this information, it displays it in three main views in the tabbed interface:

[Screenshot: the three tabbed views]

Note: you’ll need the MS Chart Controls linked from the download page in order to use the Pie Chart and Time Graph modes.

Overview

[Screenshot: the Overview table]

The Overview is a basic table of the raw data collected. By default it’s sorted by the breakdown of the Protocol tree (which you can also see on the left), but you can use the columns at the top to sort by different data points, such as the number of frames or the number of bytes.

If you want to get back to the initial view after sorting, you can find the Restore Default Ordering option in the grid’s context menu:

[Screenshot: the grid’s context menu with Restore Default Ordering]

If you select a node in the tree on the left, your view is filtered to show only that protocol and its children:

[Screenshot: the Overview filtered to a selected protocol]

Pie Chart View

The Pie Chart view is a recent addition to the tool and is pretty simple at the moment. It gives you a quick and dirty way to see which protocols are most prevalent in the trace:

[Screenshot: the Pie Chart view]

Here we can see that HTTP traffic is the primary component of this trace.

This view also filters out the noise and displays only those elements that are end nodes of the protocol tree.
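As a rough illustration of what “end nodes” means here, the sketch below (assumed data shapes and numbers, not the tool’s code) keeps only the stack paths that no other observed path extends and charts their share of bytes.

    # Byte totals per observed protocol stack path (hypothetical numbers).
    byte_counts = {
        "Ethernet/IPv4/TCP": 60,
        "Ethernet/IPv4/TCP/HTTP": 2126,
        "Ethernet/IPv4/UDP/DNS": 87,
    }

    # A path is an end node if no other observed path extends it.
    leaves = {
        path: size
        for path, size in byte_counts.items()
        if not any(other.startswith(path + "/") for other in byte_counts)
    }

    total = sum(leaves.values())
    for path, size in sorted(leaves.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{path.split('/')[-1]}: {100 * size / total:.1f}% of charted bytes")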

Time Graph View

This view is the most exciting of all in Top Protocols. It’s almost like another expert in itself, and you get it for free!

[Screenshot: the Time Graph view]

The Time Graph view shows how protocol traffic was received over the time span of the trace. This lets you see spikes in protocol traffic and more easily determine when data was sent or received.

By default this mode filters out protocols that don’t meet a certain threshold. Both the threshold and the scale of the graph can be configured in the options menu.

Helpful Tricks

There are a lot of settings in Top Protocols, which can be found under the options menu.

[Screenshot: the options menu]

The first option simply tells Top Protocols whether or not to reassemble the data first, so that any fragments encountered later are counted towards the tally of the initiating protocol. For example, if you had an HTTP payload spanning multiple packets, each one would tally towards the total of HTTP traffic encountered. With this option off, you’d see HTTP counts only for the frames containing the headers, and the rest would be lumped under TCP. Turning it off can mask the intensity of certain types of traffic, but comparing the two can tell you how fragmented your data is and whether it was more requests with smaller payloads or fewer requests with larger payloads. A work item for the future could be to calculate both of these values at the same time and present that data as well.
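Here’s a tiny illustration (a hypothetical frame model, not the expert’s implementation) of how that option changes which bucket a fragment lands in: with reassembly on, every segment of the conversation is credited to HTTP; with it off, only the segment carrying the HTTP header is.

    frames = [
        {"conv": 1, "transport": "TCP", "has_http_header": True,  "bytes": 1514},
        {"conv": 1, "transport": "TCP", "has_http_header": False, "bytes": 1514},  # continuation
        {"conv": 1, "transport": "TCP", "has_http_header": False, "bytes": 734},   # continuation
    ]

    def tally(frames, reassemble):
        counts = {}
        http_convs = {f["conv"] for f in frames if f["has_http_header"]}
        for f in frames:
            if f["has_http_header"] or (reassemble and f["conv"] in http_convs):
                proto = "HTTP"          # fragment credited to the initiating protocol
            else:
                proto = f["transport"]  # otherwise it stays raw transport
            counts[proto] = counts.get(proto, 0) + f["bytes"]
        return counts

    print("reassembly on: ", tally(frames, reassemble=True))   # all bytes under HTTP
    print("reassembly off:", tally(frames, reassemble=False))  # continuations under TCP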

The next set of options refers to the Time Graph and how it is calculated. The first option changes the number of intervals used to segment the data. The more slices you use, the greater the resolution for seeing changes, but the more memory is required. You can also set how much traffic a protocol needs before it is automatically selected in the graph.
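For instance, here’s a sketch of how a trace might be split into a fixed number of intervals and how such a selection threshold could work. The interval count, threshold, and frame records below are assumed values for illustration, not the expert’s defaults.

    num_intervals = 20            # more slices = finer resolution, more memory
    threshold_bytes = 1000        # minimum traffic for a protocol to be pre-selected

    # Hypothetical per-frame records: (timestamp in seconds, protocol, bytes).
    frames = [(0.10, "HTTP", 1514), (0.40, "DNS", 87), (7.90, "HTTP", 612)]

    start = min(t for t, _, _ in frames)
    end = max(t for t, _, _ in frames)
    width = (end - start) / num_intervals or 1.0

    series = {}  # protocol -> list of byte counts per interval
    for t, proto, size in frames:
        slot = min(int((t - start) / width), num_intervals - 1)
        series.setdefault(proto, [0] * num_intervals)[slot] += size

    selected = [p for p, buckets in series.items() if sum(buckets) >= threshold_bytes]
    print("auto-selected protocols:", selected)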

Use the “Show Tree as Hierarchy” option to decide whether the data is grouped under its carrier protocols. And use the following three options to determine whether or not to apply certain filters available from Network Monitor.

And lastly, you can decide how all these settings are persisted.

One thing to note with all these settings, though, is that they only take effect the next time a file is loaded. However, you can quickly reload your current view using the ‘Reload File’ option in the File menu:

[Screenshot: the Reload File option in the File menu]

Why’s this version 3.2 and what’s next?

Top Protocols has been around for a while, but, like most of our experts, it started out as an internal project. Since then, we’ve started pushing Experts out to the community, and they are now part of the Outercurve Foundation. This makes it a lot easier to work on these projects for everyone’s benefit, as Paul has explained before.

Originally Top Protocols was written by Paul and ran at the command prompt. Michael took over the project when the Experts feature was introduced and rewrote it to include more features and a GUI. It’s been through a couple of revisions since, which is why it was already at v3.2 when it was transferred to Outercurve and made available to everyone.

As for what’s next, I recommend you check out the Top Protocols homepage. We welcome you to submit issues and ideas, and, if you’re adventurous enough, to hop on board and help out. We’ll see where we go from there together.