Design and Implementation Guidelines for Web Clients
Retired Content |
---|
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. |
Microsoft Corporation
November 2003
Applies to:
Microsoft .NET Framework
ASP.NET
Summary: This chapter describes how to increase performance and responsiveness of the code in the presentation layer by using multithreading and asynchronous programming.
Contents
In This Chapter
Multithreading
Using Asynchronous Operations
Summary
In This Chapter
This chapter describes how to use two closely related mechanisms to enable you to design scalable and responsive presentation layers for ASP.NET Web applications. The two mechanisms are:
- Multithreading
- Asynchronous programming
Performance and responsiveness are important factors in the success of your application. Users quickly tire of using even the most functional application if it is unresponsive or regularly appears to freeze when the user initiates an action. Even though it may be a back-end process or external service causing these problems, it is the user interface where the problems become evident.
Multithreading and asynchronous programming techniques enable you to overcome these difficulties. The Microsoft .NET Framework class library makes these mechanisms easily accessible, but they are still inherently complex, and you must design your application with a full understanding of the benefits and consequences that these mechanisms bring. In particular, you must keep in mind the following points as you decide whether to use one of these threading techniques in your application:
- More threads does not necessarily mean a faster application. In fact, the use of too many threads has an adverse effect on the performance of your application. For more information, see "Using the Thread Pool" later in this chapter.
- Each time you create a thread, the system consumes memory to hold context information for the thread. Therefore, the number of threads that you can create is limited by the amount of memory available.
- Implementation of threading techniques without sufficient design is likely to lead to overly complex code that is difficult to scale and extend.
- You must be aware of what could happen when you destroy threads in your application, and make sure you handle these possible outcomes accordingly.
- Threading-related bugs are generally intermittent and difficult to isolate, debug, and resolve.
The following sections describe multithreading and asynchronous programming from the perspective of presentation layer design in ASP.NET Web applications. For information about how to use these mechanisms in Windows Forms-based applications, see "Multithreading and Asynchronous Programming in Windows Forms-Based Applications" in the appendix of this guide.
Multithreading
There are many situations where using additional threads to execute tasks allows you to provide your users with better performance and higher responsiveness in your application, including:
- When there is background processing to perform, such as waiting for authorization from a credit-card company in an online retailing Web application
- When you have a one-way operation, such as invoking a Web service to pass data entered by the user to a back-end system
- When you have discrete work units that can be processed independently, such as calling several SQL stored procedures simultaneously to gather information that you require to build a Web response page
Used appropriately, additional threads allow you to prevent your user interface from becoming unresponsive during long-running and computationally intensive tasks. Depending on the nature of your application, the use of additional threads can enable the user to continue with other tasks while an existing operation continues in the background. For example, an online retailing application can display a "Credit Card Authorization In Progress" page in the client's Web browser while a background thread at the Web server performs the authorization task. When the authorization task is complete, the background thread can return an appropriate "Success" or "Failure" page to the client. For an example of how to implement this scenario, see "How to: Execute a Long-Running Task in a Web Application" in Appendix B of this guide.
Note Do not display visual indications of how long it will take for a long-running task to complete. Inaccurate time estimations confuse and annoy users. If you do not know the scope of an operation, distract the user by displaying some other kind of activity indicator, such as an animated GIF image, promotional advertisement, or similar page.
Unfortunately, there is a run-time overhead associated with creating and destroying threads. In a large application that creates new threads frequently, this overhead can affect the overall application performance. Additionally, having too many threads running at the same time can drastically decrease the performance of a whole system as Windows tries to give each thread an opportunity to execute.
Using the Thread Pool
A common solution to the cost of excessive thread creation is to create a reusable pool of threads. When an application requires a new thread, instead of creating one, the application takes one from the thread pool. As the thread completes its task, instead of terminating, the thread returns to the pool until the next time the application requires another thread.
Thread pools are a common requirement in the development of scalable, high-performance applications. Because optimized thread pools are notoriously difficult to implement correctly, the .NET Framework provides a standard implementation in the System.Threading.ThreadPool class. The runtime creates the thread pool the first time you use the ThreadPool class.
The runtime creates a single thread pool for each run-time process (multiple application domains can run in the same runtime process). By default, this pool contains a maximum of 25 worker threads and 25 asynchronous I/O threads per processor (these sizes are set by the application hosting the common language runtime).
Because the maximum number of threads in the pool is constrained, all the threads may be busy at some point. To overcome this problem, the thread pool provides a queue for tasks awaiting execution. As a thread finishes a task and returns to the pool, the pool takes the next work item from the queue and assigns it to the thread for execution.
Benefits of Using the Thread Pool
The runtime-managed thread pool is the easiest and most reliable approach to implement multithreaded applications. The thread pool offers the following benefits:
- You do not have to worry about thread creation, scheduling, management, and termination.
- Because the thread pool size is constrained by the runtime, you avoid the performance problems caused by creating too many threads.
- The thread pool code is well tested and is less likely to contain bugs than a new custom thread pool implementation.
- You have to write less code, because the thread start and stop routines are managed internally by the .NET Framework.
The following procedure describes how to use the thread pool to perform a background task in a separate thread.
To use the thread pool to perform a background task
- Write a method that has the same signature as the WaitCallback delegate. This delegate is located in the System.Threading namespace, and is defined as follows.
[Serializable]
public delegate void WaitCallback(object state);
- Create a WaitCallback delegate instance, specifying your method as the callback.
- Pass the delegate instance into the ThreadPool.QueueUserWorkItem method to add your task to the thread pool queue. The thread pool allocates a thread for your method and calls your method on that thread.
In the following code, the AuthorizePayment method is executed in a thread allocated from the thread pool.
using System.Threading;

public class CreditCardAuthorizationManager
{
    private void AuthorizePayment(object state)
    {
        int amount = (int)state;
        // Do work here ...
    }

    public void BeginAuthorizePayment(int amount)
    {
        // Pass the amount to the callback through the state argument
        ThreadPool.QueueUserWorkItem(new WaitCallback(AuthorizePayment), amount);
    }
}
For a more detailed discussion of the thread pool, see "Programming the Thread Pool in the .NET Framework" on MSDN (https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/progthrepool.asp).
Limitations of Using the Thread Pool
Unfortunately, the thread pool suffers from limitations resulting from its shared nature that may prevent its use in some situations. In particular, these limitations are:
- The .NET Framework also uses the thread pool for asynchronous processing, placing additional demands on the limited number of threads available.
- Even though application domains provide robust application isolation boundaries, code in one application domain can affect code in other application domains in the same process if it consumes all the threads in the thread pool.
- When you submit a work item to the thread pool, you do not know when a thread becomes available to process it. If the application makes particularly heavy use of the thread pool, it may be some time before the work item executes.
- You have no control over the state and priority of a thread pool thread.
- The thread pool is unsuitable for processing simultaneous sequential operations, such as two different execution pipelines where each pipeline must proceed from step to step in a deterministic fashion.
- The thread pool is unsuitable when you need a stable identity associated with the thread, for example if you want to use a dedicated thread that you can discover by name, suspend, or abort.
In situations where use of the thread pool is inappropriate, you can create new threads manually. Manual thread creation is significantly more complex than using the thread pool, and it requires you to have a deeper understanding of the thread lifecycle and thread management. A discussion of manual thread creation and management is beyond the scope of this guide. For more information, see "Threading" in the ".NET Framework Developer's Guide" on MSDN (https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconthreading.asp).
Synchronizing Threads
If you use multiple threads in your applications, you must address the issue of thread synchronization. Consider the situation where one thread iterates over the contents of a hash table while another thread tries to add or delete hash table items. The hash table is modified without the iterating thread's knowledge, which causes the iteration to fail.
The ideal solution to this problem is to avoid shared data. In some situations, you can structure your application so that threads do not share data with other threads. This is generally possible only when you use threads to execute simple one-way tasks that do not have to interact or share results with the main application. The thread pool described earlier in this chapter is particularly suited to this model of execution.
Synchronizing Threads by Using a Monitor
It is not always feasible to isolate all the data a thread requires. To synchronize threads, you can use a Monitor object to serialize access to shared resources. In the hash table example cited earlier, the iterating thread would obtain a lock on the Hashtable object using the Monitor.Enter method, signaling to other threads that it requires exclusive access to the Hashtable. Any other thread that tries to obtain a lock on the Hashtable waits until the first thread releases the lock using the Monitor.Exit method.
The use of Monitor objects is common, and both Visual C# and Visual Basic .NET include language level support for obtaining and releasing locks:
In C#, the lock statement provides the mechanism through which you obtain the lock on an object as shown in the following example.
lock (myHashtable)
{
    // Exclusive access to myHashtable here...
}
In Visual Basic .NET, the SyncLock and End SyncLock statements provide the mechanism through which you obtain the lock on an object as shown in the following example.
SyncLock (myHashtable)
    ' Exclusive access to myHashtable here...
End SyncLock
When entering the lock (or SyncLock) block, the static (Shared in Visual Basic .NET) System.Monitor.Enter method is called on the specified expression. This method blocks until the thread of execution has an exclusive lock on the object returned by the expression.
The lock (or SyncLock) block is implicitly contained by a try statement whose finally block calls the static (or Shared) System.Monitor.Exit method on the expression. This ensures the lock is freed even when an exception is thrown. As a result, it is invalid to branch into a lock (or SyncLock) block from outside of the block.
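The expansion described above can be written out explicitly. The following standalone console sketch (not from the original text; the class and field names are illustrative) shows the Monitor.Enter/try/finally/Monitor.Exit pattern that the lock statement generates, and why the lock matters: two threads increment a shared counter, and the lock guarantees a deterministic result.

```csharp
using System;
using System.Threading;

public class LockExpansion
{
    static readonly object sync = new object();
    public static int counter = 0;

    static void Increment()
    {
        for (int i = 0; i < 100000; i++)
        {
            // What the C# lock statement expands to:
            Monitor.Enter(sync);
            try
            {
                counter++;   // exclusive access while the lock is held
            }
            finally
            {
                Monitor.Exit(sync);   // released even if an exception is thrown
            }
        }
    }

    public static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Increment));
        Thread t2 = new Thread(new ThreadStart(Increment));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        // Always 200000 with the lock; without it, the interleaved
        // read-modify-write operations would lose updates unpredictably.
        Console.WriteLine(counter);
    }
}
```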
For more information about the Monitor class, see "Monitor Class" in the ".NET Framework Class Library" on MSDN (https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemthreadingmonitorclasstopic.asp).
Using Alternative Thread Synchronization Mechanisms
The .NET Framework provides several other mechanisms that enable you to synchronize the execution of threads. These mechanisms are all exposed through classes in the System.Threading namespace. The mechanisms relevant to the presentation layer are listed in Table 6.1.
Table 6.1: Thread Synchronization Mechanisms
Mechanism | Description | Links for More Information |
---|---|---|
ReaderWriterLock | Defines a lock that implements single-writer/multiple-reader semantics, allowing many readers, but only a single writer, to access a synchronized object. Useful for classes that do much more reading than writing. | https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemthreadingreaderwriterlockclasstopic.asp |
AutoResetEvent | Notifies one or more waiting threads that an event has occurred. When the AutoResetEvent transitions from the non-signaled to the signaled state, it allows only a single waiting thread to resume execution before reverting to the non-signaled state. | https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemthreadingautoreseteventclasstopic.asp |
ManualResetEvent | Notifies one or more waiting threads that an event has occurred. When the ManualResetEvent transitions from the non-signaled to the signaled state, all waiting threads are allowed to resume execution. | https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemthreadingmanualreseteventclasstopic.asp |
Mutex | A Mutex can have a name, which allows threads in other processes to synchronize on it. Only one thread can own the Mutex at any time, providing a machine-wide synchronization mechanism; another thread can acquire the Mutex when the owner releases it. A named Mutex is principally used to ensure that only a single instance of an application runs at a time. | https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemthreadingmutexclasstopic.asp |
With such a rich selection of synchronization mechanisms available to you, you must plan your thread synchronization design carefully and consider the following points:
- It is a good idea for threads to hold locks for the shortest time possible. If threads hold locks for long periods of time, the resulting thread contention can become a major bottleneck, negating the benefits of using multiple threads in the first place.
- Be careful about introducing deadlocks caused by threads waiting for locks held by other threads. For example, if one thread holds a lock on object A and waits for a lock on object B, while another thread holds a lock on object B, but waits to lock object A, both threads end up waiting forever.
- If for some reason an object is never unlocked, all threads waiting for the lock end up waiting forever. The lock (C#) and SyncLock (Visual Basic .NET) statements make sure that a lock is always released even if an exception occurs. If you use Monitor.Enter manually, you must make sure that your code calls Monitor.Exit.
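As a concrete illustration of the event mechanisms in Table 6.1, the following standalone sketch (not from the original text; class and field names are illustrative) uses a ManualResetEvent to make a worker thread wait until the main thread has prepared shared data and signaled readiness.

```csharp
using System;
using System.Threading;

public class EventSignaling
{
    // Created in the non-signaled state: WaitOne blocks until Set is called
    static ManualResetEvent ready = new ManualResetEvent(false);
    static string message;
    public static string observed;

    static void Worker()
    {
        ready.WaitOne();        // block until the main thread signals
        observed = message;     // safe to read: Set/WaitOne ordered the accesses
    }

    public static void Main()
    {
        Thread t = new Thread(new ThreadStart(Worker));
        t.Start();
        message = "authorized";
        ready.Set();            // ManualResetEvent releases every waiting thread
        t.Join();
        Console.WriteLine(observed);
    }
}
```

With an AutoResetEvent in place of the ManualResetEvent, only one waiting thread would be released per call to Set before the event reverted to the non-signaled state.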
Using multiple threads can significantly enhance the performance of your presentation layer components, but you must make sure you pay close attention to thread synchronization issues to prevent locking problems.
Troubleshooting
The difficulties in identifying and resolving problems in multithreaded applications occur because the operating system's scheduling of threads is non-deterministic; you cannot reproduce the exact same code execution sequence across multiple test runs. This means that a problem may occur one time you run the application, but not another. To make things worse, the steps you typically take to debug an application—such as using breakpoints, stepping through code, and logging—change the threading behavior of a multithreaded program and frequently mask thread-related problems. To resolve thread-related problems, you typically have to set up long-running test cycles that log sufficient debug information to allow you to understand the problem when it occurs.
Note For more in-depth information about debugging, see "Production Debugging for .NET Framework Applications" on MSDN (https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/DBGrm.asp).
Using Asynchronous Operations
Some operations take a long time to complete. These operations generally fall into two categories:
- I/O bound operations such as calling SQL Server, calling a Web service, or calling a remote object using .NET Framework remoting
- CPU-bound operations such as sorting collections, performing complex mathematical calculations, or converting large amounts of data
The use of additional threads to execute long-running tasks is a common way to maintain responsiveness in your application while the operation executes. Because threads are used so frequently to overcome the problem of long-running operations, the .NET Framework provides a standardized mechanism for invoking asynchronous operations that saves you from working directly with threads.
Typically, when you invoke a method, your application blocks until the method is complete; this is known as synchronous invocation. When you invoke a method asynchronously, control returns immediately to your application; your application continues to execute while the asynchronous operation executes independently. Your application either monitors the asynchronous operation or receives notification by way of a callback when the operation is complete; this is when your application can obtain and process the results.
The fact that your application does not block while the asynchronous operation executes means the application can perform other processing. The approach you use to invoke the asynchronous operation (discussed in the next section) determines how much scope you have for processing other tasks while waiting for the operation to complete.
Using the .NET Framework Asynchronous Execution Pattern
The .NET Framework allows you to execute any method asynchronously using the asynchronous execution pattern. This pattern involves the use of a delegate and three methods named Invoke, BeginInvoke, and EndInvoke.
The following example declares a delegate named AuthorizeDelegate. The delegate specifies the signature for methods that perform credit card authorization.
public delegate int AuthorizeDelegate(string creditcardNumber,
DateTime expiryDate,
double amount);
When you compile this code, the compiler generates Invoke, BeginInvoke, and EndInvoke methods for the delegate. Figure 6.1 shows how these methods appear in the IL Disassembler.
Figure 6.1. MSIL signatures for the Invoke, BeginInvoke, and EndInvoke methods in a delegate
The equivalent C# signatures for these methods are as follows.
// Signature of compiler-generated BeginInvoke method
public IAsyncResult BeginInvoke(string creditcardNumber,
DateTime expiryDate,
double amount,
AsyncCallback callback,
object asyncState);
// Signature of compiler-generated EndInvoke method
public int EndInvoke(IAsyncResult ar);
// Signature of compiler-generated Invoke method
public int Invoke(string creditcardNumber,
DateTime expiryDate,
double amount);
The following sections describe the BeginInvoke, EndInvoke, and Invoke methods, and clarify their role in the asynchronous execution pattern. For full details on how to use the asynchronous execution pattern, see "Including Asynchronous Calls" in the ".NET Framework Developer's Guide" on MSDN (https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconasynchronousprogramming.asp).
Performing Synchronous Execution with the Invoke Method
The Invoke method synchronously executes the method referenced by the delegate instance. If you call a method by using Invoke, your code blocks until the method returns.
Using Invoke is similar to calling the referenced method directly, but there is one significant difference: the delegate simulates synchronous execution by calling BeginInvoke and EndInvoke internally. Therefore, your method executes in the context of a different thread from the calling code, even though it appears to execute synchronously. For more information, see the description of BeginInvoke in the next section.
Initiating Asynchronous Operations with the BeginInvoke Method
The BeginInvoke method initiates the asynchronous execution of the method referenced by the delegate instance. Control returns to the calling code immediately, and the method referenced by the delegate executes independently in the context of a thread from the runtime's thread pool.
The "Multithreading" section earlier in this chapter describes the thread pool in detail; however, it is worth highlighting the consequences of using a separate thread, and in particular one drawn from the thread pool:
- The runtime manages the thread pool. You have no control over the scheduling of the thread, nor can you change the thread's priority.
- The runtime's thread pool contains a maximum of 25 worker threads per processor. If you invoke asynchronous operations too liberally, you can easily exhaust the pool, causing the runtime to queue excess asynchronous operations until a thread becomes available.
- The asynchronous method runs in the context of a different thread to the calling code. This causes problems when asynchronous operations try to update Windows Forms components.
The signature of the BeginInvoke method includes the same arguments as those specified by the delegate signature. It also includes two additional arguments to support asynchronous completion:
- callback argument–Specifies an AsyncCallback delegate instance. If you specify a non-null value for this argument, the runtime calls the specified callback method when the asynchronous method completes. If this argument is a null reference, you must monitor the asynchronous operation to determine when it is complete. For more information, see "Managing Asynchronous Completion with the EndInvoke Method" later in this chapter.
- asyncState argument–Takes a reference to any object. The asynchronous method does not use this object, but it is available to your code when the method completes; this allows you to associate useful state information with an asynchronous operation. For example, this object allows you to map results against initiated operations in situations where you initiate many asynchronous operations that use a common callback method to perform completion.
The IAsyncResult object returned by BeginInvoke provides a reference to the asynchronous operation. You can use the IAsyncResult object for the following purposes:
- Monitor the status of an asynchronous operation
- Block execution of the current thread until an asynchronous operation completes
- Obtain the results of an asynchronous operation using the EndInvoke method
The following procedure shows how to invoke a method asynchronously by using the BeginInvoke method.
To invoke a method asynchronously by using BeginInvoke
- Declare a delegate with a signature to match the method you want to execute.
- Create a delegate instance containing a reference to the method you want to execute.
- Execute the method asynchronously by calling the BeginInvoke method on the delegate instance you just created.
The following code fragment demonstrates the implementation of these steps. The example also shows how to register a callback method; this method is called automatically when the asynchronous method completes. For more information about defining callback methods and other possible techniques for dealing with asynchronous method completion, see "Managing Asynchronous Completion with the EndInvoke Method" later in this chapter.
public class CreditCardAuthorizationManager
{
// Delegate, defines signature of method(s) you want to execute asynchronously
public delegate int AuthorizeDelegate(string creditcardNumber,
DateTime expiryDate,
double amount);
// Method to initiate the asynchronous operation
public void StartAuthorize(string creditcardNumber, DateTime expiryDate, double amount)
{
AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber,
expiryDate,
amount,
new AsyncCallback(AuthorizationComplete),
null);
}
// Method to perform a time-consuming operation (this method executes
// asynchronously on a thread from the thread pool)
private int AuthorizePayment(string creditcardNumber,
DateTime expiryDate,
double amount)
{
int authorizationCode = 0;
// Open connection to Credit Card Authorization Service ...
// Authorize Credit Card (assigning the result to authorizationCode) ...
// Close connection to Credit Card Authorization Service ...
return authorizationCode;
}
// Method to handle completion of the asynchronous operation
public void AuthorizationComplete(IAsyncResult ar)
{
// See "Managing Asynchronous Completion with the EndInvoke Method"
// later in this chapter.
}
}
The following section describes all the possible ways to manage asynchronous method completion.
Managing Asynchronous Completion with the EndInvoke Method
In most situations, you will want to obtain the return value of an asynchronous operation that you initiated. To obtain the result, you must know when the operation is complete. The asynchronous execution pattern provides the following mechanisms to determine whether an asynchronous operation is complete:
- Blocking–This is rarely used because it provides few advantages over synchronous execution. One use for blocking is to perform impersonation on a different thread; it is never used for parallelism.
- Polling–Generally avoid this approach because it is inefficient; use waiting or callbacks instead.
- Waiting–This is typically used for displaying a progress or activity indicator during asynchronous operations.
- Callbacks–These provide the most flexibility, allowing you to execute other functionality while an asynchronous operation executes.
The process involved in obtaining the results of an asynchronous operation varies depending on the method of asynchronous completion you use. However, eventually you must call the EndInvoke method of the delegate. The EndInvoke method takes an IAsyncResult object that identifies the asynchronous operation to obtain the result from. The EndInvoke method returns the data that you would receive if you called the original method synchronously.
The following sections explore each approach to asynchronous method completion in more detail.
Using the Blocking Approach
To use blocking, call EndInvoke on the delegate instance and pass the IAsyncResult object representing an incomplete asynchronous operation. The calling thread blocks until the asynchronous operation completes. If the operation is already complete, EndInvoke returns immediately.
The following code sample shows how to invoke a method asynchronously, and then block until the method has completed.
AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber, // 1st param into async method
expiryDate, // 2nd param into async method
amount, // 3rd param into async method
null, // No callback
null); // No additional state
// Block until the asynchronous operation is complete
int authorizationCode = ad.EndInvoke(ar);
The use of blocking might seem a strange approach to asynchronous completion, because it offers the same functionality as a synchronous method call. However, blocking is occasionally useful because, unlike synchronous execution, which blocks immediately, it lets you decide when your thread enters the blocked state. Blocking can be useful if the user initiates an asynchronous operation after which there are a limited number of steps or operations they can perform before the application must have the result of the asynchronous operation.
Using the Polling Approach
To use polling, write a loop that repeatedly tests the completion state of an asynchronous operation using the IsCompleted property of the IAsyncResult object.
The following code sample shows how to invoke a method asynchronously, and then poll until the method completes.
AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber, // 1st param into async method
expiryDate, // 2nd param into async method
amount, // 3rd param into async method
null, // No callback
null); // No additional state
// Poll until the asynchronous operation completes
while (!ar.IsCompleted)
{
// Do some other work...
}
// Get the result of the asynchronous operation
int authorizationCode = ad.EndInvoke(ar);
Polling is a simple but inefficient approach that imposes major limitations on what you can do while the asynchronous operation completes. Because your code is in a loop, the user's workflow is heavily restricted, providing few benefits over synchronous method invocation. Polling is really only suitable for displaying a progress indicator in smart client applications during short asynchronous operations. Generally, avoid polling in favor of waiting or callbacks.
Using the Waiting Approach
Waiting is similar to blocking, but you can also specify a timeout value after which the thread resumes execution if the asynchronous operation is still incomplete. Using waiting with timeouts in a loop provides functionality similar to polling, but it is more efficient because the runtime places the thread in a CPU-efficient wait state instead of spinning in a code-level loop.
To use the waiting approach, you use the AsyncWaitHandle property of the IAsyncResult object. The AsyncWaitHandle property returns a WaitHandle object. Call the WaitOne method on this object to wait for a single asynchronous operation to complete.
The following code sample shows how to invoke a method asynchronously, and then wait for a maximum of 2 seconds for the method to complete.
AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber, // 1st param into async method
expiryDate, // 2nd param into async method
amount, // 3rd param into async method
null, // No callback
null); // No additional state
// Wait up to 2 seconds for the asynchronous operation to complete
WaitHandle waitHandle = ar.AsyncWaitHandle;
waitHandle.WaitOne(2000, false);
// If the asynchronous operation completed, get its result
if (ar.IsCompleted)
{
// Get the result of the asynchronous operation
int authorizationCode = ad.EndInvoke(ar);
...
}
Despite the advantages, waiting imposes the same limitations as polling—the functionality available to the user is restricted because you are in a loop, even though it is an efficient one. Waiting is useful if you want to show a progress or activity indicator when executing long-running processes that must complete before the user can proceed.
Another advantage of waiting is that you can use the static methods of the System.Threading.WaitHandle class to wait on a set of asynchronous operations. You can wait either for the first one to complete (using the WaitAny method) or for them all to complete (using the WaitAll method). This is very useful if you initiate a number of asynchronous operations at the same time and have to coordinate the execution of your application based on the completion of one or more of these operations.
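The coordination pattern described above can be sketched as follows. This standalone example (not from the original text; class and method names are illustrative) uses thread pool work items that signal ManualResetEvent objects rather than delegate BeginInvoke, but the WaitHandle.WaitAll call is used exactly as it would be with the AsyncWaitHandle of each IAsyncResult; WaitAny would return as soon as the first operation completed.

```csharp
using System;
using System.Threading;

public class WaitAllExample
{
    public static ManualResetEvent[] done = new ManualResetEvent[3];
    public static int[] results = new int[3];

    // Simulated authorization task; the state object carries the index
    static void Authorize(object state)
    {
        int i = (int)state;
        results[i] = 100 + i;      // hypothetical authorization code
        done[i].Set();             // signal that this operation is complete
    }

    public static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            done[i] = new ManualResetEvent(false);
            ThreadPool.QueueUserWorkItem(new WaitCallback(Authorize), i);
        }

        // Block until all three operations have signaled completion
        WaitHandle.WaitAll(done);
        Console.WriteLine("{0} {1} {2}", results[0], results[1], results[2]);
    }
}
```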
Using Callbacks
When you specify an AsyncCallback delegate instance in the BeginInvoke method, you do not have to actively monitor the asynchronous operation for completion. Instead, when the operation completes, the runtime calls the method referenced by the AsyncCallback delegate and passes an IAsyncResult object identifying the completed operation. The runtime executes the callback method in the context of a thread from the runtime's thread pool.
The following code sample shows how to invoke a method asynchronously, and specify a callback method that will be called on completion.
AuthorizeDelegate ad = new AuthorizeDelegate(AuthorizePayment);
IAsyncResult ar = ad.BeginInvoke(creditcardNumber,
expiryDate,
amount,
new AsyncCallback(AuthorizationComplete),
null);
...
// Method to handle completion of the asynchronous operation
public void AuthorizationComplete(IAsyncResult ar)
{
// Retrieve the delegate that corresponds to the asynchronous method
// (the AsyncResult class is in the System.Runtime.Remoting.Messaging namespace)
AuthorizeDelegate ad = (AuthorizeDelegate)((AsyncResult)ar).AsyncDelegate;
// Get the result of the asynchronous method
int authorizationCode = ad.EndInvoke(ar);
}
The great benefit of using callbacks is that your code is completely free to continue with other processes, and it does not constrain the workflow of the application user. However, because the callback method executes in the context of another thread, you face the same threading issues highlighted earlier in the discussion of the BeginInvoke method.
Using Built-In Asynchronous I/O Support
I/O is a situation where you frequently use asynchronous method calls. Because of this, many .NET Framework classes that provide access to I/O operations expose methods that implement the asynchronous execution pattern. This saves you from declaring and instantiating delegates to execute the I/O operations asynchronously. The following list identifies the most common scenarios where you would use asynchronous I/O in your presentation layer and provides a link to a document where you can find implementation details:
- Consuming XML Web services:
- Calling methods on remote objects using .NET Framework remoting:
- File access:
- Network communications:
- https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconmakingasynchronousrequests.asp
- https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconusingnon-blockingclientsocket.asp
- https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconusingnon-blockingserversocket.asp
- Microsoft message queue:
Using the built-in asynchronous capabilities of the .NET Framework makes developing asynchronous solutions easier than explicitly creating delegates to implement asynchronous operations.
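As an illustration of this built-in support, the following standalone sketch (not from the original text; the file name is illustrative) uses the BeginRead and EndRead methods that FileStream exposes. They follow the same Begin/End pattern described earlier in this chapter, but no delegate declaration is required.

```csharp
using System;
using System.IO;
using System.Text;

public class AsyncFileRead
{
    public static string resultText;

    public static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "async-io-demo.txt");
        File.WriteAllText(path, "hello");

        // The final constructor argument requests asynchronous I/O
        FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                       FileShare.Read, 4096, true);
        byte[] buffer = new byte[5];

        // Same Begin/End pattern as the delegate examples, built into FileStream
        IAsyncResult ar = fs.BeginRead(buffer, 0, buffer.Length, null, null);
        int bytesRead = fs.EndRead(ar);   // blocks until the read completes
        fs.Close();

        resultText = Encoding.ASCII.GetString(buffer, 0, bytesRead);
        Console.WriteLine(resultText);
    }
}
```

A callback could be supplied in place of the first null argument to BeginRead, exactly as with delegate BeginInvoke.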
Summary
Application performance and scalability can be greatly enhanced using multithreading and asynchronous operations. Wherever possible, try to use these techniques to increase the responsiveness of your presentation layer components.