Dynamic Performance Analysis: Rules and Guidance for Visual Studio Profiling Tools Users

Mark Friedman, Architect, Visual Studio Diagnostic Team, Microsoft Corporation

May 2010

The Visual Studio profiling tools gather large amounts of performance data that are then summarized and reported. This analysis, for example, aggregates the call stacks acquired during a sampling profile into call trees, which are grouped by module and function. One hazard of digesting the large amount of detailed performance data that profiling runs generate is that a broad body of knowledge is required to interpret it properly. Depending on the application, it might be necessary to understand the performance of the underlying hardware configuration, various operating system services, the .NET Framework and its Common Language Runtime, the database management system (DBMS), the networking protocols, and, of course, the structure and architecture of the application under investigation. With all this to consider, newcomers to the performance tools in Visual Studio will especially welcome help in interpreting the performance data that is gathered and analyzed.

Applies To

Visual Studio 2010

Contents

How the Rules Engine Works

Categories of Performance Rules

Rule Definitions

Rules that Analyze Windows Performance Counters

Effective use of the .NET Framework

Tools Guidance

Resource Monitoring

This white paper introduces a feature of the Visual Studio profiling tools called Performance Rules that provides help and guidance for interpreting the profiling data. During the Analysis phase of a profiling run, the performance information that was gathered is processed by simple, declarative Performance Rules designed to help interpret the measurements. The Performance Rules can help you understand key aspects of your application’s performance.

How the Rules Engine Works

During execution of an application that is being profiled, different kinds of measurements can be gathered. When you configure a profiling session, you choose one of the following primary collection methods:

  • Sampling. Sample execution call stacks of the application are gathered.

  • Instrumentation. Instrumentation is added to each method called by the application to measure how long it executes.

  • .NET Memory. Instrumentation is added to track the memory footprint and lifetime of each instance of a .NET Framework class.

  • Concurrency. Instrumentation is added to measure every method that uses synchronization and locking, in order to determine the duration of delays due to lock contention.

These primary instrumentation and collection methods of the Visual Studio 2010 Profiler are mutually exclusive. Only one primary collection method can be specified for any single profiling run. See the Help topic How to: Choose Collection Methods for more information on the collection methods that are available.

In addition, the Profiler can be set up to gather secondary measurements, including tier interaction data for multi-tiered applications, Windows performance counter data, and processor hardware counter data. These secondary data are collected in parallel with the primary collection method during the profiling run and also cease when the profiling run terminates.

When the data gathering phase completes, the analysis phase of the profiling run is initiated. During the analysis phase, the raw data gathered during the profiling run is summarized. For example, call stacks that are gathered during the profiling run are aggregated into call trees that show the overall usage of the various execution paths taken through the application’s code during the profiling run.

Following this summarization operation, the Rules Engine is invoked during the last stage of the Analysis phase. The Rules Engine steps through each defined Rule one at a time, using a component that gives each Rule access to the profiling data it examines. In addition, for Windows performance counters, the Rules Engine provides simple statistical functions, since the raw Windows performance counter data is not otherwise summarized during Analysis.

When any given Rule is triggered during Analysis, a message is written to the Visual Studio Error List window. The message reports the threshold value that triggered the Rule and also provides a brief guidance message. Figure 1 illustrates the guidance messages issued by the Rules Engine during analysis of a concurrency profile.

Figure 1: Warning and Information Messages Generated By the Rules Engine in the Error List Window.


This example shows two Warnings and one Information message from Rules that were triggered during processing. In addition, the Rules Engine also generates Information messages from process utilization measurements that are derived from standard Windows performance counters. These specific Information messages are generated by a handful of Rules that always fire when a minimum number of samples have been collected.

To view a more detailed explanation of the Rule that fired, position the cursor on the guidance message in the Visual Studio Error List window and then press F1 to navigate to the MSDN Help for Visual Studio.

Categories of Performance Rules

The Rules Engine can subject any of the measurements taken during a profiling run to specific threshold tests that are used to trigger a Rule. Essentially, each Rule is designed to detect a meaningful pattern in the profiling data that was gathered or to identify a common anti-pattern to be avoided.

There are several different kinds of Rules that are examined during Analysis. For convenience, Rules are organized into categories. The set of Rule categories in Visual Studio 2010 includes:

  • Using the .NET Framework effectively

  • Using Memory effectively in the .NET Framework application

  • Using other hardware resources (the central processing unit, virtual memory, the network, etc.) effectively

  • Using the Visual Studio profiling tools effectively

Some of the Rules look for specific trouble spots in your application’s usage of the .NET Framework. Other Rules examine key Windows performance counters that are gathered automatically during profiling runs to augment the measurements that the Profiler takes directly. Both of these types of Rules attempt to spot known anti-patterns that have a history of impacting the performance of many programs and applications.

Often, Windows performance counter measurements are meaningful only when they are interpreted in context. Several Windows counter Rules compute ratios that relate measurements of one Windows counter to another over the length of the profiling run. Once they are calculated, these ratios are subjected to similar threshold tests.

Another type of rule gathers and reports on key Windows performance counters that are important indicators of resource utilization by your application process. For any given application or test case, it is usually important for you to understand how the application utilizes the processor, memory, the disk, the network, and other shared computer resources. The Rules Engine automatically gathers the Windows performance counters needed for this resource monitoring. In general, there are no fixed or predetermined thresholds associated with these resource usage indicators. Key indicators of processor and memory usage by the process being profiled are provided for information only. Threshold values for these rules are set to 0 so that they always fire when a minimum number of samples have been collected. These rules generate Information messages that are always available in the Visual Studio Error List window, unless you decide to disable these specific performance rules.

Whether or not specific guidance Rules based on Windows performance counter measurements fire, you should be alert to any significant change in the application’s pattern of resource usage, relative to its past history. A sharp change in the resource profile of your application can be very significant; it may be associated with changes in the application code or with the particular test scenario you are exercising during the profiling run.

A final type of Rule is designed to offer guidance about using the Visual Studio profiling tools effectively. Each type of profiling run gathers performance data that is most useful for solving particular types of problems. One of the Tool Guidance Rules will fire, for example, if you are running one type of profile and there are indications that running another type of profile might also be useful.

Rule Definitions

A file named VSPerf_Rule_Definitions.xml, located in the \Team Tools\Performance Tools sub-directory of the Microsoft Visual Studio 10.0 root directory, contains the default set of rules processed by the Profiler Rules Engine. Note that this set of Rule definitions is not intended to be edited or modified.

Let’s look at one of the rules that is included with Visual Studio 2010 to illustrate how the performance rules are defined.

Example 1

<Rule>
    <ID>7</ID>
    <Title>Avoid using exceptions for routine control flow.</Title>
    <Category>.NET Framework</Category>
    <ContextView>Marks</ContextView>
    <GuidanceMessage>A high number of exceptions are consistently being thrown. Consider
reducing the use of exceptions in program logic.</GuidanceMessage>
    <Condition xsi:type="WinCounterCondition">
      <WinCounterName>\.NET CLR Exceptions(@Instance)\# of Exceps Thrown / sec</WinCounterName>
      <Threshold>10</Threshold>
      <AggregationType>Average</AggregationType>
      <IsProcessSpecific>true</IsProcessSpecific>
      <MinimumNumberOfValues>25</MinimumNumberOfValues>
    </Condition>
    <HelpKeyword>vs.performance.7</HelpKeyword>
    <Action>Warning</Action>
  </Rule>

A performance rule is defined by a <Rule></Rule> element in an XML-format data file. Each Rule is identified by a unique ID, a Title that is displayed in the Visual Studio Tools, Options dialog box, and a GuidanceMessage that is displayed when the rule fires during the Analysis stage. As the example shows, each Rule also specifies a Category, a HelpKeyword used to locate the corresponding MSDN Help topic, and an Action (Warning or Information) that determines the type of message written to the Error List window.

The Rule Condition clause

When the Rules Engine processes a rule, it evaluates the rule’s Condition clause. The Rule Condition is the core of any performance rule definition. For the Rule to fire, its Condition must evaluate to "True."

Each Condition clause is identified by one of the types shown in Table 1, which also defines the source of the measurement data that the Rule evaluates.

Table 1: Rule Condition Types

  • WinCounterCondition. Tests the value of a single Windows performance counter. For example, the Rule can test the value of the \Processor(_Total)\% Processor Time counter.

  • WinCounterRatioCondition. Tests the ratio of two Windows performance counters. For example, the Rule can test the ratio of \Processor(_Total)\% Privileged Time to \Processor(_Total)\% User Time.

  • FunctionThresholdPercentCondition. Tests the value reported for a specified function during the Analysis phase of profiling, following call stack aggregation. Usually, either the InclusiveApplicationTimePercent or the ExclusiveApplicationTimePercent value for the function is tested.

  • ProfilerSpecificPropertyCondition. Tests the value of a profile-specific Property, often one related to an error condition that was detected during the profiling run.

  • VspPropertyThresholdCondition. Tests the value of a profile-specific Property that was calculated during aggregation. For example, the Rule can test the TotalSamples property to estimate the sampling error in the measurements gathered.

  • VspPropertyRatioCondition. Tests the ratio of two profile-specific Property values that were calculated during aggregation. For example, the Rule can test the ratio of the NumKernelSamples property to the TotalSamples property.

  • VspPropertyBoolCondition. Tests any of the Boolean Properties associated with the profiling run. Some of these Properties identify the type of profiling run: IsSampling, IsInstrumentation, IsAllocation, IsLifetime, IsConcurrency, IsResourceContention, or IsTip.

  • CompositeCondition. Contains two nested child conditions that are evaluated independently, and a Boolean operator (either And or Or) that is used to join them logically. Composite conditions permit a single Rule to test for multiple conditions.

Composite conditions allow development of complex processing Rules. A child condition of a CompositeCondition can itself be a CompositeCondition, which allows condition statements to be nested to any arbitrary depth. Example 2 illustrates the use of nested CompositeCondition clauses to craft a Tool Guidance Rule that fires when it detects enough activity inside ADO.NET and LINQ to SQL functions to suggest that gathering Tier Interaction measurements to augment the primary profiling data would be worthwhile.

Example 2: CompositeCondition Clauses Permit Nesting of Rule Condition Clauses to Arbitrary Depth

<Rule>
    <ID>30</ID>
    <Title>Gather Tier Interaction measurements for database projects.</Title>
    <Category>Tool Guidance</Category>
    <ContextView>FunctionDetails</ContextView>
    <GuidanceMessage>Gathering interaction measurements for multi-tiered applications will help you understand database usage patterns and key data access delays. Try profiling the application again with the Tier Interaction Profiling option enabled.</GuidanceMessage>
    <Condition xsi:type="CompositeCondition">
      <CondType>And</CondType>
      <Condition1 xsi:type="VspPropertyBoolCondition">
        <PropertyName>IsTip</PropertyName>
        <Invert>true</Invert>
      </Condition1>
      <Condition2 xsi:type="CompositeCondition">
        <CondType>Or</CondType>
        <Condition1 xsi:type="CompositeCondition">
          <CondType>Or</CondType>
          <Condition1 xsi:type="FunctionThresholdPercentCondition">
            <Threshold>2</Threshold>
            <ModuleName>System.Data.dll</ModuleName>
            <FunctionName>System.Data.*</FunctionName>
            <ColumnName>InclSamplesPercent</ColumnName>
            <ComplexRegex>false</ComplexRegex>
          </Condition1>
          <Condition2 xsi:type="FunctionThresholdPercentCondition">
            <Threshold>2</Threshold>
            <ModuleName>System.Data.ni.dll</ModuleName>
            <FunctionName>System.Data.*</FunctionName>
            <ColumnName>InclSamplesPercent</ColumnName>
            <ComplexRegex>false</ComplexRegex>
          </Condition2>
        </Condition1>
        <Condition2 xsi:type="FunctionThresholdPercentCondition">
          <Threshold>2</Threshold>
          <ModuleName>System.Data.Linq.ni.dll</ModuleName>
          <FunctionName>System.Data.*</FunctionName>
          <ColumnName>InclSamplesPercent</ColumnName>
          <ComplexRegex>false</ComplexRegex>
        </Condition2>
      </Condition2>
    </Condition>
    <HelpKeyword>vs.performance.30</HelpKeyword>
    <Action>Information</Action>
  </Rule>

Rules that Analyze Windows Performance Counters

Many performance investigations are open-ended. Someone may have noticed that a new version of the application in development runs 60% slower than the earlier version. You may want to use the Visual Studio Profiler to figure out why it is slower and, perhaps, how to fix it. You may be called upon to investigate a performance problem in an application (or in parts of an application) that you are not very familiar with. When the performance problem being investigated is not well-defined in advance, or you are using the Profiler to look at someone else’s code for the first time, the amount of detailed performance data in the Profiler reports can be overwhelming.

Under these circumstances, it is often useful to gather data from some basic Windows performance counters that monitor resource utilization during execution of the application being profiled. The Rules Engine is a facility that automatically initiates collection of a valuable set of Windows performance counters during a profiling run. For example, the Windows counter data gathered automatically includes processor and memory activity as well as specific measurements made by the .NET Framework runtime. The counter data enriches your view of an application’s resource consumption.

If you are able to use the Windows performance counters to identify a potential resource bottleneck, then you can use the Visual Studio profiling tools more effectively. Each of the primary collection methods available in the Visual Studio 2010 Profiler is designed to help with a specific type of performance problem. (For example, a sampling profile illuminates problems associated with excessive CPU consumption.) See the Tools Guidance section of this document for a complete discussion of the affinity between the collection techniques and specific performance investigations. The high-level view of the application’s resource usage profile that the standard Windows performance counters provide can help guide your performance investigation.

Gathering Windows Counters

Rules based on Windows counters also have a data gathering function. This function ensures that any Windows performance counter referenced in a Performance Rule is automatically collected during the profiling run so that it is available during Rules Engine processing later in the Analysis phase.

The Rules definition file is accessed before starting profiling data collection. The Rules file is parsed to identify those Windows performance counters that the Rules require. During data collection, these Windows performance counters are added to any other Windows performance counters that you have decided to gather. See How to: Collect Windows Counter Data for details. Note that all Windows counters gathered are governed by the same collection interval.

You can examine the individual measurements of the Windows counters that are gathered by the Rules engine by accessing the Marks View. Note that you can copy the performance counter data from the Marks View into a Microsoft Excel spreadsheet for further analysis or charting.

During Analysis, the Rules are evaluated one at a time, in sequence, to determine which of them fire. If a Rule specifies a Windows counter that does not exist or is currently inactive, there will be no measurement data for the Rule to analyze and it cannot fire. Investigate the source of this performance counter data collection problem, correct it, and then re-run the performance profile.

System-level performance counters

The Visual Studio Profiler automatically gathers system-level Windows performance counters like \Processor(_Total)\% Processor Time and \Memory\Pages/sec. If the application you are profiling is the only application active on the machine, it is easy to associate the system-level activity you observe with that application. However, if other applications are active during the profiling run, attributing the activity observed at the system level to an individual application process is usually quite difficult.

Process-specific counters

The most interesting Windows performance counters to evaluate from a profiling run are the ones associated with the process being profiled. These are known as process-specific counters. The process-specific Windows counters include measurements of both physical and virtual memory usage by your application. An additional set of .NET performance counters is also available at the process level for applications that rely on the Common Language Runtime (CLR) of the .NET Framework.

Rules can specify that process-specific Windows counters of the general form \ObjectName(@Instance)\counterName be gathered. When the Rules are scanned during profiling data collection initialization, the "@Instance" tag is replaced by the name of the process being profiled. An example of a Rule that gathers process-specific counters is shown in Example 3.

Example 3: A Rule That Gathers Process-Specific Counters

<Rule>
    <ID>7</ID>
    <Title>Avoid using exceptions for routine control flow.</Title>
    <Category>.NET Framework</Category>
    <ContextView>Marks</ContextView>
    <GuidanceMessage>A high number of exceptions are consistently being thrown. Consider reducing the use of exceptions in program logic.</GuidanceMessage>
    <Condition xsi:type="WinCounterCondition">
      <WinCounterName>\.NET CLR Exceptions(@Instance)\# of Exceps Thrown / sec</WinCounterName>
      <Threshold>10</Threshold>
      <AggregationType>Average</AggregationType>
      <IsProcessSpecific>true</IsProcessSpecific>
      <MinimumNumberOfValues>25</MinimumNumberOfValues>
    </Condition>
    <HelpKeyword>vs.performance.7</HelpKeyword>
    <Action>Warning</Action>
  </Rule>

The standard set of Rules that come with Visual Studio 2010 gathers several process-specific counters, including processor utilization at the process level. Note that sampling profiles and the Concurrency profile (which helps you visualize the behavior of a multi-threaded application) measure processor usage more accurately than the related Windows performance counters. However, the process-specific Windows performance counters break out processor utilization into time spent in User and Privileged modes of execution, which can also be useful to understand.
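As an illustration of how a ratio rule of the WinCounterRatioCondition type listed in Table 1 might surface this breakdown, consider the following sketch. This is a hypothetical rule, not one of the rules shipped with Visual Studio 2010: the rule ID, Category string, Threshold, and guidance text are illustrative, and the element names used for the two counters inside the ratio condition are assumptions, since the actual WinCounterRatioCondition schema is not reproduced in this paper. The sketch also assumes the computed ratio is compared against the Threshold in the same way a single counter value is in Example 1.

<Rule>
    <ID>900</ID>
    <Title>Most processor time is spent in privileged (kernel) mode.</Title>
    <Category>Resource Monitoring</Category>
    <ContextView>Marks</ContextView>
    <GuidanceMessage>More than half of the processor time consumed by this process was spent in privileged mode. Consider investigating I/O activity, paging, or frequent system calls.</GuidanceMessage>
    <Condition xsi:type="WinCounterRatioCondition">
      <!-- Hypothetical element names for the numerator and denominator counters. -->
      <WinCounterNameNumerator>\Process(@Instance)\% Privileged Time</WinCounterNameNumerator>
      <WinCounterNameDenominator>\Process(@Instance)\% Processor Time</WinCounterNameDenominator>
      <Threshold>0.5</Threshold>
      <AggregationType>Average</AggregationType>
      <IsProcessSpecific>true</IsProcessSpecific>
      <MinimumNumberOfValues>25</MinimumNumberOfValues>
    </Condition>
    <Action>Information</Action>
  </Rule>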

Effective use of the .NET Framework

One of the problems .NET developers face is that they rely on enormous libraries of other people’s code whose performance characteristics may not be known or readily understood. The Visual Studio Profiler is the primary tool to use to understand your application’s usage of the .NET Framework, along with any other components and classes in third-party frameworks that the application references.

The .NET Framework has a myriad of classes and methods to choose from, some of which perform semantically equivalent functions but may exhibit sharply different performance behavior. For instance, there are a variety of Collection classes, each with very specific performance characteristics, primarily based on the following:

  • The size of the collection

    -and-

  • The methods that are most commonly used to access and manipulate it.

There is no one Collection class that is an unqualified best choice for every situation. Choosing the right Collection class for your application requires understanding these performance trade-offs. If the collection is small and infrequently accessed, the application’s performance might not suffer much if the wrong collection class is chosen. Alternatively, if the collection is large and frequently accessed and manipulated, the choice of collection class can be crucial to application performance and scalability. Remember that performance issues of this type often do not arise during simple unit testing. The performance and scalability issues may not become evident until you execute a load test or a specific unit test designed to simulate stress conditions.

The Code Analysis feature of Visual Studio is a related tool that also analyzes your .NET Framework programming choices. Code Analysis contains many useful Rules for avoiding known performance anti-patterns in your .NET Framework code. Running Code Analysis against your source code before running the Visual Studio Profiler is highly recommended. Because the features overlap to some extent, it is worth taking a moment to compare and contrast the two approaches. For more information, see Performance Warnings.

When it comes to performance and scalability, many of the choices you make among .NET Framework facilities are neither definitively good nor bad. With performance issues such as those associated with scalability, it is often necessary to choose between alternatives that have complex trade-offs. In these cases, the static code analysis approach is not always very effective. Using an empirical approach that evaluates measurements taken while the application is running, the Visual Studio profiling tools help you identify the specific patterns your application uses that are most detrimental to performance and scalability.

Dynamic vs. static analysis

The Code Analysis feature uses static code analysis that examines your application code during compilation, looking for known anti-patterns that can cause performance problems. In contrast, the Profiler’s Performance Rules engine analyzes the measurement data gathered during the execution of your application, also looking for known anti-patterns. The static analysis tools can tell you that your application contains code with known performance penalties. Although it is always a good practice to fix performance violations that the static analysis tools identify, fixing these violations may not have a big impact on the overall performance of your application. For example, your application may implement a recognized anti-pattern or practice, but it might make only limited use of the problematic code.

In contrast to static code analysis, a .NET Framework guidance rule in the Profiler fires based on measurements of the application taken during its execution. The rule fires only when it identifies a specific anti-pattern that was observed to have a significant impact on the application’s performance. Rules that fire inform you that your application may be making excessive use of a specific .NET Framework method or feature, highlighting ones that tend to be very costly. The guidance here supplements and extends the practical advice on performance contained in the volume of the Microsoft Patterns and Practices library entitled Improving .NET Application Performance and Scalability. This reference contains an extensive and authoritative set of recommendations that .NET application developers should follow in order to develop programs with well-understood performance characteristics.

Many of the Rules are triggered by a threshold test of the percentage of inclusive samples attributed to a specific .NET Framework Module or Function, compared to the total number of instruction execution samples for the entire application process. The inclusive samples count represents the number of times a call stack sample was taken in which code from the relevant Module or Function was referenced somewhere in the call stack. At the level of a Module or Function, inclusive samples indicate overall usage of that module or function.

Inclusive sampling Rules fire based on a threshold test of the percentage of inclusive samples reported for a .NET Framework Module or Function relative to the overall number of call stack samples collected. Even if the percentage of inclusive samples is high enough to trigger the rule, make sure the profile has accumulated enough samples of execution time to be statistically significant. Also, consider whether the application’s use of the .NET Framework anti-pattern is justified by other considerations, such as maintainability, security, or adherence to other project coding standards.
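To make the threshold mechanism concrete, the sketch below shows what such a rule could look like, modeled on the FunctionThresholdPercentCondition clauses in Example 2. It is a hypothetical rule, not one of the rules shipped with Visual Studio 2010; the rule ID, module and function names, threshold value, and guidance text are illustrative assumptions only.

<Rule>
    <ID>901</ID>
    <Title>Consider using StringBuilder for repeated string concatenation.</Title>
    <Category>.NET Framework</Category>
    <ContextView>FunctionDetails</ContextView>
    <GuidanceMessage>A significant percentage of call stack samples include System.String.Concat. Consider using StringBuilder when strings are concatenated repeatedly, for example inside loops.</GuidanceMessage>
    <Condition xsi:type="FunctionThresholdPercentCondition">
      <!-- Fires when inclusive samples attributed to the named function exceed the threshold percentage. -->
      <Threshold>5</Threshold>
      <ModuleName>mscorlib.ni.dll</ModuleName>
      <FunctionName>System.String.Concat*</FunctionName>
      <ColumnName>InclSamplesPercent</ColumnName>
      <ComplexRegex>false</ComplexRegex>
    </Condition>
    <Action>Warning</Action>
  </Rule>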

A critical consideration in using the Visual Studio profiling tools effectively is ensuring that the measurements taken while the application is running are representative of how the application is expected to perform in production. Your choices about which test cases to execute and measure are very important. You want to profile your application to verify that it performs well under the conditions that you expect it to face in the real world.

For more information on the specific Performance Rules that offer .NET Framework usage guidance, see the MSDN Help topic entitled .NET Framework Usage Performance Rules. Before you take action to correct the condition that triggered one of these rules, make sure that the application’s performance is impacted to the extent that making changes to the application’s logic is warranted.

Tools Guidance

The Visual Studio Premium Profiling Tools provide four primary measurement collection methods:

  • Sampling

  • Instrumentation

  • .NET Memory

  • Concurrency

Each of these profiling methods is a probe that gathers detailed and specific measurement data about your application during a profiling run. Due to overhead considerations, only one primary profiling method can be used to gather information about an application process at a time.

The Visual Studio Profiling Tools also provide a collection method that instruments and measures the amount of time .NET Framework applications spend waiting for ADO.NET database calls to complete. This feature is called Tier Interaction Profiling (TIP), and it can be used in conjunction with any of the primary profiling collection methods. TIP specifically instruments the calls your application makes to ADO.NET APIs. Because TIP instrumentation is added on top of the measurements taken by the individual profiling methods, TIP must be explicitly enabled in addition to choosing a profiling method. See the MSDN Help topic Understanding Profiling Methods for more information about each of these profiling methods.

During a profiling run, the Visual Studio Profiling Tools measure the application under investigation, attributing CPU usage, memory usage, lock contention, or execution time to specific modules and functions that are present within the application process address space. The Profiler reports allow you to look deep inside the application’s performance to understand how the program’s execution logic accounts for this particular pattern of resource usage.

Selecting the right profiling tool is essential to resolving most performance problems. To use the Visual Studio profiling tools effectively, you need to select a tool that is a good match to the performance problem you are investigating. For example, if your application’s performance or scalability is mainly impacted by a virtual or physical memory shortage, when you profile the application using sampling you are not likely to gain much additional insight into this condition. Note that while a sampling profile is the suggested default, this is mainly because sampling data collection is fast, easy, and relatively low impact, compared to the other profiling methods. A sampling profile is almost always the best tool to investigate a problem related to excess processor utilization. On the other hand, it is not the best tool to use to resolve a performance problem in a multi-threaded application that is experiencing severe delays due to lock contention, for example.

The Performance Rules feature contains a number of rules to help you match the available performance tools to your application’s performance profile. When one of the Tools Guidance rules fires, it will usually suggest making another profiling run, either selecting one of the other profiling tools or changing some of the profiling options, to gain more insight into the problem. For example, there is a Tools Guidance rule (shown earlier in Example 2) that fires during analysis of a sampling profile when it detects frequent calls to ADO.NET and suggests that you enable Tier Interaction Profiling if it is not already active.

For more information on the specific Performance Rules that offer guidance on using the profiling tools, see the MSDN Help topic entitled Profiling Tools Usage Rules.

Resource Monitoring

For any given application or test case, it is usually important for you to understand the pattern of how your application utilizes the processor, memory, the disk, and the network. The Resource Monitoring rules gather and report on Windows performance counters that are key indicators of resource utilization by your application process. The Rules Engine automatically gathers the Windows performance counters that measure utilization of important resources at the process level.

The Resource Monitoring rules gather data on:

  • Processor utilization

  • Process virtual memory usage

  • Both system-wide paging activity and process physical memory usage

In general, there are no fixed or predetermined thresholds associated with any of these resource usage indicators. What is an acceptable level of processor usage, for example, depends, in part, on what the application is trying to accomplish and, in part, what kinds of processing resources are or will be present when the application is deployed.

Key indicators of processor and memory usage by the process being profiled are provided for information only. Threshold values for these rules are set to 0 so that they always fire when a minimum number of samples have been collected. These rules generate Information messages that are always available in the Visual Studio Error List window, unless you decide to disable these specific performance rules.
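The sketch below illustrates the general shape such an information-only rule might take, reusing the WinCounterCondition schema shown in Example 1. It is a hypothetical rule: the rule ID, Category string, and guidance text are assumptions for illustration. The key point is the Threshold of 0, which causes the rule to fire whenever the minimum number of counter samples has been collected.

<Rule>
    <ID>902</ID>
    <Title>Report average processor utilization for the profiled process.</Title>
    <Category>Resource Monitoring</Category>
    <ContextView>Marks</ContextView>
    <GuidanceMessage>Average processor utilization measured for the process being profiled. See the Marks View for the individual counter values gathered during the run.</GuidanceMessage>
    <Condition xsi:type="WinCounterCondition">
      <!-- A Threshold of 0 makes this an always-firing, information-only rule. -->
      <WinCounterName>\Process(@Instance)\% Processor Time</WinCounterName>
      <Threshold>0</Threshold>
      <AggregationType>Average</AggregationType>
      <IsProcessSpecific>true</IsProcessSpecific>
      <MinimumNumberOfValues>25</MinimumNumberOfValues>
    </Condition>
    <Action>Information</Action>
  </Rule>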

Whether or not specific guidance Rules based on Windows performance counter measurements fire, you should be alert to any significant change in the application’s pattern of resource usage, relative to its past history. Abrupt changes in the resource profile of your application can be significant; they may be associated with changes in the application code or with the particular test scenario you are exercising during the profiling run.

This document contains information to help you use the Performance Rules feature effectively. It is based on the Visual Studio 2010 release of the profiling tools and the set of Performance Rules included with it. It also discusses the way Rules are processed by the Rules Engine during a profiling run, so that you can better understand what it means when one of the Rules is triggered.

See Also

Concepts

Technical Articles for Visual Studio Application Lifecycle Management

Visual Studio Application Lifecycle Management