August 2010

Volume 25 Number 08

Speed SQL - Tuning Your Database Calls with Tier Interaction Profiling

By Mark Friedman | August 2010

Many applications are designed explicitly around multiple tiers, where the performance of calls into the data access tier is crucial to the overall responsiveness of the application. Using multiple tiers increases an application’s flexibility, and the n-tiered approach fosters isolation of key components, which can improve both reliability and scalability. It also promotes scalability because components that are isolated into distinct tiers can more readily be distributed across available computing resources.

Tier interaction profiling (TIP) is designed to help you understand the performance of the data layer on which the application relies. TIP is a new feature of the Visual Studio profiling tools that measures and reports the duration of data tier delays that .NET Framework applications experience when waiting for synchronous calls to ADO.NET-compliant databases to complete. For response-time-oriented applications that make frequent calls to databases, TIP helps you understand which data requests contribute the most to response delays.

In this article, I’ll introduce you to TIP and demonstrate its reporting capabilities. I’ll also discuss the instrumentation technology that TIP relies on and provide some best practices for using TIP effectively to diagnose performance problems related to database activity. I will walk you through using TIP on a sample two-tiered ASP.NET Web application that is data-intensive and that accesses data from Microsoft SQL Server using the LINQ to SQL technology. Finally, I will discuss how you can augment the TIP performance data by using standard SQL Administrator performance tools to gain an even deeper understanding of the performance of a data tier.

Getting Started with TIP

TIP dynamically adds instrumentation code to measure the duration of calls to the data layer of your application during a profiling run. The Visual Studio profiling tools are available in Visual Studio 2010 Premium Edition and Visual Studio 2010 Ultimate Edition.

To begin a profiling session, click Launch Performance Wizard from the Analyze menu, click Start Performance Analysis from the Debug menu, or press Alt+F2. The first page of the Performance Wizard then asks you to select a profiling method.

TIP can be used with any profiling method (sampling, instrumentation, memory, or concurrency), but it isn’t enabled by default. To enable TIP, you will need to uncheck the “Launch profiling after the wizard finishes” checkbox on page 3 of the Performance Wizard. (Because TIP isn’t turned on yet, you’re not ready to launch your application and begin profiling.) For best results, I recommend selecting the sampling method of profiling initially, especially if what you care about most is the data tier interaction data, mainly because sampling has the least impact on the performance of your application.

To enable TIP data collection, find the performance session the wizard just created in the Performance Explorer window, right-click it, and select Properties. In the properties dialog, select the Tier Interactions tab, then check the "Enable tier interaction profiling" checkbox. Click the OK button to dismiss the dialog. Now that TIP is enabled, you’re ready to start profiling your application.

To start the actual profiling run, click the Launch with Profiling toolbar button in the Performance Explorer window.

For complete instructions on enabling TIP in Visual Studio, see Habib Heydarian’s blog post “Walkthrough: Using the Tier Interaction Profiler in Visual Studio Team System 2010” at https://blogs.msdn.com/b/habibh/archive/2009/06/30/walkthrough-using-the-tier-interaction-profiler-in-visual-studio-team-system-2010.aspx.

How TIP Measures Performance

TIP inserts code into your application during a profiling run that measures calls to the ADO.NET data layer your application uses. When tier interaction profiling is active, the Visual Studio Profiler inspects the Microsoft intermediate language (MSIL) in your solution, looking for references to ADO.NET functions. Prior to calling the just-in-time (JIT) compiler to generate the native code run by the application, the profiler inserts instructions that add instrumentation to key ADO.NET methods. This instrumentation code keeps track of the amount of time spent during each ADO.NET call.

As the application executes, this instrumentation records the duration of every ADO.NET call that is made and also captures a copy of the command text used in the database call. Timing data is gathered for all database accesses performed using synchronous methods of the ADO.NET classes, including the SQL, OLE DB, Open Database Connectivity (ODBC), and SQL Server Compact (SQL CE) APIs. TIP also captures timing data if your application uses LINQ to SQL or the Entity Framework APIs to access SQL databases.
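Conceptually, the measurement works like the following sketch, which times a synchronous ADO.NET call and captures its command text. This is only an illustration of the idea, not the code the profiler actually injects, and the connection string and query are placeholder assumptions:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class AdoTimingSketch
{
    static void Main()
    {
        // Placeholder connection string and query; substitute your own.
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Sales.Customer", conn))
        {
            conn.Open();
            var sw = Stopwatch.StartNew();          // timestamp before the call
            object rowCount = cmd.ExecuteScalar();  // synchronous out-of-process call
            sw.Stop();                              // timestamp after the call
            // TIP records the elapsed time together with the command text.
            Console.WriteLine("{0} ms: {1}", sw.ElapsedMilliseconds, cmd.CommandText);
        }
    }
}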

The timing data is stored along with any other data gathered during the profiling session in a Visual Studio profiler file (.vsp). Because a call to an external database is an out-of-process call that costs far more than the handful of instructions TIP adds, the instrumentation has very little impact on overall application performance.

Tuning for Performance

A common design pattern involves dividing a Web application into a presentation layer, a business logic layer and a data layer. This design paradigm leads to breaking your application into component parts to foster reliability, extensibility and scalability. The multi-tiered application accesses its data layer using business logic components that reference data entities logically as rows and columns in a related set of tables. How a database like SQL Server maintains the physical data associated with the database tables is transparent to your application by design.

For the sake of reliability and scalability, large-scale Web applications often configure multiple machines into pools that are responsible for the processing logic associated with each layer of the application. Having multiple machines support multiple tiers creates a special challenge in performance analysis, because monitoring any single machine provides an incomplete picture of the application.

For example, using a database system like SQL Server to manage and mediate access to the application’s data store creates a data layer that is isolated from the application logic. The application data housed in a database like SQL Server is maintained in a separate process address space with its own data store. The database containing the application data can reside on the same physical machine as the application, but is more likely to be accessed from a different machine using networking protocols.

SQL commands that the application passes to an external database for execution there are out-of-process calls. Sampling profiles show that the application is dormant while waiting for these out-of-process calls to complete, but cannot tell you why the application is waiting or how long the delays last. Instrumented and concurrency profiles do measure the duration of these delays, but cannot tell you what SQL commands are being issued or why they’re taking so long to complete.

In multi-tiered applications that communicate with an external database, the database component is often a major contributor to overall application response time. The Visual Studio profiling tools offer a variety of profiling methods (sampling, instrumentation, memory allocation, and concurrency), but without TIP data none of these methods is much help in identifying performance issues associated with accessing an external database.

TIP data lets you drill down into database-related delays and understand why they’re occurring. In conjunction with other performance tools your database vendor can supply, you can also start to understand what can be done to improve the performance of an application that depends heavily on database performance.

Because TIP adds instrumentation to the application code, timing data associated with database commands can be gathered no matter where the database instance being accessed is physically located. Timing data can be gathered, for example, for out-of-process calls to a SQL Server instance physically resident on the same machine as the application, which is a typical scenario in unit testing. When the same application is ready for integration or load testing against a SQL Server instance located on a separate physical machine, TIP can continue to gather measurement data for that configuration. In fact, the TIP measurements enable you to compare the performance of these two different configurations.

TIP lets you compare and contrast the impact of the many external database performance and tuning options that are available, including cache memory configuration, physical data storage devices, database partitioning, database indexing and database table design. In addition, you can directly measure the performance impact of running SQL Server on a virtual machine.

TIP Report Basics

Once a profiling session with TIP activated completes, the timing data associated with your application’s interactions with its ADO.NET data layer is summarized in the Tier Interactions view. Figure 1 shows an example of the profiler when TIP data gathering is active and there is ADO.NET activity during the profiling run.

image: A Visual Studio Profiler Tier Interaction Report

Figure 1 A Visual Studio Profiler Tier Interaction Report

The top half of the report summarizes the profiling data that was gathered. For ASP.NET applications, the view is organized by URL, grouping the server-side response times of Web application GET requests.

Below the application layer, the report shows each connection to a database tier (in this example, the AdventureWorks sample database). It measures and reports the portion of the server-side processing time for ASP.NET requests that is associated with synchronous database calls made through ADO.NET. In this example three summary lines appear, each tabulating the database activity associated with one of three different ASP.NET pages in the Web site being profiled. For each ASP.NET page identified during profiling, the report shows the number of requests processed during the profiling run and the server-side response time for each response message that was generated.

Additional summary lines show response-time data for other GET requests, including requests for stylesheets, JavaScript code, and images linked in the pages. Any database calls that the profiler is unable to associate with a specific ASP.NET request are grouped under an Other Requests category.

When you’re profiling a Windows desktop or console application that utilizes a data layer, the report organizes the ADO.NET activity under the process name.

Below each Web page summary line is a single summary line that reports the number of synchronous database requests, organized by database connection, that were made during ASP.NET processing. In this example, you can see that six ASP.NET requests to CustomerQuery.aspx were handled over a single database connection. These six requests took a total of 0.959 seconds to process on the server, for an average response time of roughly 160 milliseconds (0.959 s ÷ 6). The requests issued 12 SQL queries that took approximately 45 milliseconds to complete, so waiting for the database accounted for only about 5 percent of the elapsed time spent generating the response messages for this Web page.

If you highlight one of the database connection summary lines, the bottom half of the Tier Interactions view breaks out the specific SQL commands that the application issued. The SQL commands are grouped by the command text issued and sorted by elapsed time within the page group.

In the example, one SQL command was issued three times, another command was issued six times, and a third query was issued three times. In the detail view, the elapsed time of each specific query that is rolled up into a single line on the summary report is reported separately. You can see the total elapsed time, the average across all instances of the command, and the minimum and maximum delays that were observed for each query.

If you double-click the SQL command detail line, the full text of the SQL command that was issued is displayed in a Database Command Text window. This is the actual command the application passed to the database across the ADO.NET interface during execution. If the request is for execution of a stored procedure, the specific call to the stored procedure is shown.

A LINQ to SQL Example

Let’s look at a simple example of using TIP to understand an ASP.NET application that depends heavily on accessing information from a database.

TIP can be especially helpful with applications that use LINQ to SQL to access data stored in an external SQL Server database, because LINQ tends to remove the developer one step further from the physical database and its performance characteristics. With LINQ to SQL, the entity-relationship (E-R) diagrams that you create in the Object Relational Designer generate classes that Visual Studio then uses as templates for building syntactically correct SQL commands automatically.

Because LINQ to SQL eliminates most SQL language coding considerations and is easy to use, it also tends to mask important performance considerations associated with database design, configuration and tuning. As this example illustrates, using LINQ, you can easily create a complex query that joins multiple tables without having to consider the performance implications of doing so.

With TIP, you can see the actual SQL command text that LINQ to SQL generates and gather measurements from the runtime execution of these SQL queries. With the SQL command text in hand, you can then access other database tuning tools to help you better understand the performance implications of any particular LINQ to SQL operation.

My example is a Web Form application that queries the AdventureWorks Sales.Customer tables using a specific customer ID to retrieve that customer’s order history. The AdventureWorks tables involved in this query include the Customer, SalesOrderHeader, SalesOrderDetail and StateProvince tables, as illustrated in the Object Relational Designer view in Figure 2.

image: The AdventureWorks Tables Used to Query Sales.Customer Information

Figure 2 The AdventureWorks Tables Used to Query Sales.Customer Information

If you also wanted to display the customer’s mailing and e-mail address information alongside the order history, the CustomerAddress, Address, and Contact tables would also need to be accessed. As shown in the Object Relational Designer, the AdventureWorks tables contain primary and foreign keys such as CustomerID, SalesOrderID, and ContactID that allow these tables to be joined logically.

The C# code to create an AdventureWorks customer query using LINQ to SQL is shown in Figure 3. In this case custid is the specific CustomerID value requested. The query returns a customerquery collection that contains a single data row with the data fields listed in the select new clause.

Figure 3 LINQ to SQL Customer Query

var customerquery = 
  from customers in db.Customers
  from custaddrs in db.CustomerAddresses
  from addrs in db.Addresses
  where (customers.CustomerID == custid &&
         customers.CustomerID == custaddrs.CustomerID &&
         custaddrs.AddressID == addrs.AddressID)
  select new {
    customers.CustomerID,
    customers.CustomerType,
    addrs.AddressLine1,
    addrs.AddressLine2,
    addrs.City,
    addrs.StateProvince,
    addrs.PostalCode,
    customers.TerritoryID
  };
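The query assumes a LINQ to SQL DataContext named db and a requested customer ID named custid that are created elsewhere in the page. A minimal sketch of that setup, assuming the Object Relational Designer generated a DataContext class named AdventureWorksDataContext (the class name in your project may differ):

// Hypothetical setup for the query in Figure 3; the DataContext class
// name is generated by the Object Relational Designer and may differ.
var db = new AdventureWorksDataContext();
int custid = 1; // the CustomerID being requested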

The customerquery collection can then be bound to a control on an ASP.NET Web page:

DetailsView1.DataSource = customerquery;
DetailsView1.DataBind();

Now I can create a query to retrieve the order history of this customer:

var orderquery = 
  from orderhdr in db.SalesOrderHeaders
  where (orderhdr.CustomerID == custid)
  orderby orderhdr.OrderDate
  select new {
    Date = orderhdr.OrderDate.ToShortDateString(),
    orderhdr.AccountNumber,
    InvoiceNo = orderhdr.SalesOrderID,
    orderhdr.TotalDue
  };

When this LINQ to SQL operation executes, orderquery will contain a row corresponding to each row in the SalesOrderHeader table associated with the specified CustomerID, so the collection will contain multiple rows if the customer’s history includes multiple sales transactions.
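As with customerquery, the results can be bound directly to a data-bound control on the page (GridView1 here is a hypothetical control name):

GridView1.DataSource = orderquery;
GridView1.DataBind();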

These queries look straightforward on the surface. But using TIP, you can start to understand the performance implications of these seemingly simple LINQ to SQL operations.

Using TIP Data for Tuning

Let’s take a closer look at customerquery. At run time, LINQ to SQL takes the logical database SELECT operations implied in the LINQ statement and uses them to generate a valid SQL command that joins data from four AdventureWorks tables: Customers, CustomerAddresses, Addresses and the static StateProvince table. None of this is visible in the LINQ to SQL code itself.

When you run this code under the Visual Studio profiler, the TIP instrumentation reports the number of times this query was executed and measures the time the Web page was delayed waiting for its execution. This is, in fact, the operation that executed six times during the profiling run illustrated back in Figure 1.

In addition, as you saw earlier, the SQL command generated by the LINQ to SQL code is also available when the application is profiled. Figure 4 shows the actual SQL command for this operation.

Figure 4 SQL Command for customerquery

SELECT [t0].[CustomerID], [t0].[CustomerType], [t2].[AddressLine1], [t2].[AddressLine2], [t2].[City], [t3].[StateProvinceID], [t3].[StateProvinceCode], [t3].[CountryRegionCode], [t3].[IsOnlyStateProvinceFlag], [t3].[Name], [t3].[TerritoryID], [t3].[rowguid], [t3].[ModifiedDate], [t2].[PostalCode], [t0].[TerritoryID] AS [TerritoryID2]
FROM [Sales].[Customer] AS [t0]
CROSS JOIN [Sales].[CustomerAddress] AS [t1]
CROSS JOIN [Person].[Address] AS [t2]
INNER JOIN [Person].[StateProvince] AS [t3] ON [t3].[StateProvinceID] = [t2].[StateProvinceID]
WHERE ([t0].[CustomerID] = @p0) AND ([t0].[CustomerID] = [t1].[CustomerID]) AND ([t1].[AddressID] = [t2].[AddressID])

Note that the SQL command text includes a token (designated here as “@p0”) to represent the customer ID parameter that LINQ feeds into the query.

Now that the actual SQL command text generated by LINQ is available, it’s possible to understand how the database design is affecting the performance of the query.

One thing you can do is execute this SQL command in SQL Server Management Studio and examine its execution plan, as shown in Figure 5. To access the execution plan for this query, you need to add a command to point to the appropriate database:

USE AdventureWorks;
GO

Next, copy the SQL command text from the TIP report, remembering to replace the “@p0” token with a valid CustomerID from the database. Then execute this example query in SQL Server Management Studio and access the execution plan that shows how the query optimizer translated the logical request into a physical plan of execution.
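Alternatively, you can leave the command text parameterized and declare the token as a local variable rather than editing the query; the CustomerID value used here is only an example:

DECLARE @p0 int;
SET @p0 = 1;  -- any valid CustomerID from Sales.Customer
-- Paste the SELECT statement from Figure 4 here, leaving @p0 in place.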

image: The Execution Plan for the Example LINQ to SQL Operation

Figure 5 The Execution Plan for the Example LINQ to SQL Operation

In this example, the execution plan shows that the SELECT statement accessed the Customer table through the clustered index on the CustomerID field, returning exactly one row. In SQL Server Management Studio you can hover over an operation to view its properties, or highlight the operation and right-click to open the Properties window. In this fashion, you can cycle through each of the remaining operations in the plan. Each of the three subsequent JOINs that augment the initial Customer SELECT also accesses its table through a clustered index and returns a single unique row.

From this investigation, you can see that, to process this query, a total of four rows, each from a different table in the AdventureWorks database, needs to be accessed. Each access is performed efficiently utilizing the table’s unique primary key.
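If you prefer a textual rendering of the plan to the graphical view, SQL Server can return the estimated plan as text using a standard T-SQL option:

SET SHOWPLAN_TEXT ON;
GO
-- Execute the query from Figure 4 here; SQL Server returns the
-- estimated plan as rows of text instead of running the query.
GO
SET SHOWPLAN_TEXT OFF;
GO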

Similarly, you can use TIP to see the SQL command for the orderquery code and feed it to SQL Server Management Studio to see its execution plan (see Figure 6). This query accesses a single table, SalesOrderHeader, using the CustomerID as a foreign key, so it needs to access both an ordinary nonclustered index on CustomerID and the table’s clustered index on SalesOrderID.

image: The Execution Plan for orderquery

Figure 6 The Execution Plan for orderquery

This particular instance of the query returns nine rows. The orderby clause in the LINQ to SQL code translates into a SQL ORDER BY clause that entails an additional Sort operation on the resultset of the SELECT. The Sort represents 40 percent of the overall cost of executing the request, as estimated by the SQL Server query optimizer.
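The plan also hints at one possible tuning response, which I offer only as a sketch: if the nonclustered index on CustomerID also carried OrderDate as a key column, SQL Server could return the rows already sorted and skip the Sort step. The index name below is hypothetical, and you should weigh the extra update cost against your workload before creating anything like it:

-- Hypothetical index: rows for a given CustomerID come back already
-- ordered by OrderDate, so the ORDER BY no longer needs a separate Sort.
CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_CustomerID_OrderDate
ON Sales.SalesOrderHeader (CustomerID, OrderDate);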

Choosing a Profiling Context

TIP is designed to supplement existing Visual Studio profiling methods with specific measurements of data tier interactions. TIP is a secondary data-gathering facility; its data cannot be gathered without specifying a primary profiling method. TIP data can be gathered during sampling, instrumentation and concurrency profiling runs for any application that communicates with a data tier using ADO.NET.

Given that you need to choose a primary profiling method for an application for which you want to gather TIP data, which should you use? Let’s walk through some of the considerations for choosing a primary profiling method.

Are the delays associated with data tier interactions the primary focus of the performance investigation? If so, sampling profiling is the recommended primary method because it’s usually the least intrusive form of profiling.

If data tier delays aren’t the primary focus of the performance investigation, choose a profiling method based on what is most likely to provide the measurement data that is most applicable in the current context. For instance, if you’re investigating a problem associated with the concurrent execution of multiple threads, gather concurrency data. If you’re investigating a problem associated with a CPU-bound application, gather sampling data. For additional guidance on which primary collection method to choose, see the article “How to: Choose Collection Methods”.

If you’re not already familiar with the data layer code, you may need help from the primary profiling data to find the exact code where these ADO.NET calls originate. TIP does not capture a call stack when it gathers timing information on synchronous ADO.NET out-of-process calls. If you need to understand where in the application calls to ADO.NET methods are being made, instrumented profiles will be the most helpful. Sampling data can also help you do this, but not with the precision of an instrumented profile.

You can choose to gather resource-contention data along with the tier interaction measurements, but gathering contention data tends to be a higher-overhead function than sampling, and the contention data is unlikely to provide any assistance in helping to determine where specific ADO.NET calls originate. Investigations that require .NET Memory Allocation profiling, which are normally very high impact, are unlikely to benefit from gathering tier interaction measurements.

Sampling Profiles

Often, the data tier interactions themselves are the primary focus of a performance investigation. In this case, best results are usually obtained when sampling is chosen as the primary profiling method. Sampling is preferred in this case mainly because it’s the profiling method that usually affects an application’s performance the least. Sampling profiles can also prove helpful in navigating to the source code where the ADO.NET calls that most affect performance originate.

With data tier functions that run out-of-process, the instruction execution samples gathered during a sampling profile normally do not reflect any of the time the application spends waiting for synchronous calls across an ADO.NET interface to complete. While an application thread waits for these out-of-process calls to complete, the thread is blocked and no execution samples are recorded against it. When sampling is used, gathering TIP data is the best way to understand which application delays are due to synchronous calls to a data tier.

The instrumentation used by TIP doesn’t capture call stacks when it gathers timing data. So, if you’re profiling a tiered application and aren’t thoroughly familiar with the code, it can be difficult to determine precisely where the calls to the data layer originate. Sampling profiles may also help identify where in the application code calls to these ADO.NET interfaces are performed. If the application makes frequent calls to ADO.NET interfaces, some samples showing time spent in ADO.NET modules, including System.Data.dll and System.Data.Linq.dll, are likely to be gathered.

In reviewing the sampling data and comparing it to the tier interaction measurements, keep in mind that while an application thread is blocked waiting for a synchronous database call to complete, no sampling data for the thread is gathered. The samples accumulated during execution exclude the out-of-process delays TIP explicitly measures. What you can expect, however, is a rough correlation between the execution samples gathered in ADO.NET methods and the number of ADO.NET commands that TIP observes and measures. In such cases, sampling profiles can assist you in navigating to the source code where the ADO.NET calls that TIP measures and reports originate.

Note that if an application has SQL queries that return large resultsets that are then bound to a data-bound control on a form, you’re apt to find a significant number of execution samples in the control’s DataBind method. Seeing which DataBind methods appear in the sampling profile may also assist you in navigating to the source code where the ADO.NET calls originate.

Instrumented Profiles

When instrumented profiles are gathered, the timing data for methods recorded by the instrumentation already include any time spent waiting for out-of-process calls to complete inside the method. The timing data recorded in an instrumented profile is gathered by measuring the time at the entry and exit of each of the application’s methods that are selected for instrumentation. Timing data for methods in an application that interface to a data-tier layer using ADO.NET calls implicitly include the delay in executing any out-of-process calls.

The timing data gathered from TIP explicitly identifies and measures out-of-process delays separately. The delay measured by tier interaction profiling should be a subset of the overall time spent inside the method, as measured during an instrumented profiling run. For example, if an instrumented profile reports that a method took 200 ms and TIP attributes 150 ms of that to a database query issued from the method, the remaining 50 ms was spent executing the method’s own code. With this understanding, you can match the timing data from tier interaction profiling against the method-level timing data in an instrumented profile to pinpoint the method where the call to the data layer originated.

If instrumentation at the method level is sufficient to allow you to figure out where in the application any ADO.NET calls originate, then instrumented profiling can be used without hesitation. However, instrumented profiles are usually much more intrusive than sampling profiles, incur greater overhead, and tend to generate significantly larger .vsp collection files. In addition, if your application uses methods that make multiple calls to ADO.NET functions, the data gathered by instrumentation will only help you navigate to the method level, not differentiate among multiple ADO.NET calls embedded inside a single method.

More Data, Please

Building multi-tiered applications is a design pattern that promotes reliability and scalability, but does lead to performance-monitoring challenges when application components execute on different machines.

A simple view of a multi-tiered application that doesn’t encompass all of its interconnected layers can’t provide a complete picture of its performance. As you’ve seen, TIP can supply key timing data that’s otherwise missing. As the example in this article suggests, this timing data can be augmented with other performance data from standard database administrator tools.


Mark Friedman is an Architect on the Visual Studio Ultimate team at Microsoft. He’s the author of two books on Windows performance and blogs periodically about performance issues at blogs.msdn.com/ddperf/.

Thanks to the following technical experts for reviewing this article: Daryush Laqab and Chris Schmich