.NET

The CLR's Thread Pool

Jeffrey Richter

Contents

The Birth of the Thread Pool
Capability 1: Calling a Method Asynchronously
Capability 2: Calling a Method at Timed Intervals
Capability 3: Calling a Method When a Single Kernel Object Becomes Signaled
Conclusion

Microsoft is always trying to improve the performance of its platforms and applications. Many years ago, Microsoft researched how threads were being used by application developers to see what could be done to improve their use. Out of this research came a very important discovery: developers frequently created a new thread to perform a single task and when the task was complete, the thread would die.

This pattern is extremely common in a server application. A client makes a request of the server, the server creates a thread to process the client's request, and then, when the client's request is complete, the server's thread dies. Compared to a process, creating and destroying a thread is fast and uses fewer OS resources. But creating and destroying threads is certainly not free.

To create a thread, a kernel object is allocated and initialized, the thread's stack memory is allocated and initialized, and Windows® sends every DLL in the process a DLL_THREAD_ATTACH notification, causing pages from disk to be faulted into memory so that code can execute. When a thread dies, every DLL is sent a DLL_THREAD_DETACH notification, the thread's stack memory is freed, and the kernel object is freed (if its usage count goes to 0). So, there is a lot of overhead associated with creating and destroying a thread that has nothing to do with the work that the thread was created to perform in the first place.

The Birth of the Thread Pool

This research led Microsoft to implement a thread pool, which first appeared in Windows 2000. When the Microsoft® .NET Framework team was designing and building the common language runtime (CLR), they decided to implement a thread pool right in the CLR itself. This way, any managed application could take advantage of a thread pool even if the application was running on a version of Windows prior to Windows 2000 (such as Windows 98).

When the CLR initializes, its thread pool contains no threads. When the application would otherwise create a thread to perform a task, it should instead request that the task be performed by a thread pool thread. The thread pool sees that it has no threads and will create an initial one. This new thread goes through the same initialization as any other thread; but, when the task is complete, the thread does not destroy itself. Instead, it returns to the thread pool in a suspended state. If the application makes another request of the thread pool, the suspended thread simply wakes up and performs the task, and a new thread is not created. This saves a lot of overhead. As long as the application queues tasks to the thread pool no faster than the one thread can handle them, the same thread gets reused over and over again, saving an enormous amount of overhead over the app's lifetime.

Now, if the application queues up tasks for the thread pool faster than the one thread can handle them, then the thread pool will create additional threads. Of course, creating new threads does generate overhead, but it is very likely that the application will require just a few threads to handle all of the tasks thrown at it over its lifetime. So, overall, the application's performance improves by using the thread pool.

Now, you might be wondering what happens if the thread pool contains many threads and the workload on the application diminishes. In this case, the thread pool contains several threads that sit suspended for long periods of time, wasting OS resources. Microsoft thought about this, too. When a thread pool thread suspends itself, it waits for 40 seconds. If 40 seconds elapse and the thread is given nothing to do, the thread wakes up and destroys itself, freeing all the OS resources (stack, kernel object, and so forth) that it was using. And having the thread wake up and destroy itself probably doesn't hurt performance, because the application can't be doing too much anyway or the thread would have resumed execution. By the way, though I said that thread pool threads wake themselves up after 40 seconds, the actual amount of time is not documented and is subject to change.

The cool thing about a thread pool is that it is heuristic. If your application needs to perform many tasks, the thread pool creates more threads. If your application's workload dies down, the thread pool threads kill themselves. The thread pool's algorithms ensure that it contains only as many threads as the workload placed on it requires!

So, hopefully, you now understand the general concept behind a thread pool and see the performance advantages that it can offer. At this time, I'd like to show you some code demonstrating how to use the thread pool. First, you should know that the thread pool offers four capabilities:

  • Calling a method asynchronously
  • Calling a method at a timed interval
  • Calling a method when a single kernel object is signaled
  • Calling a method when an asynchronous I/O request completes

The first three capabilities are quite useful and I will demonstrate them in this column. However, the fourth capability is very rarely used by application developers, so I will not demonstrate it here; perhaps I'll cover it in a future column.

Capability 1: Calling a Method Asynchronously

In your application, if you have code in which you create a new thread to perform a task, I recommend that you replace that code with code that directs the thread pool to perform the task instead. In fact, you'll generally find that it is easier to have the thread pool perform a task than it is to create a new, dedicated thread for it.

To queue a task for the thread pool, you use the ThreadPool class defined in the System.Threading namespace. The ThreadPool class offers only static methods and no instance of it can be constructed. To have a thread pool thread call a method asynchronously, your code must call one of ThreadPool's overloaded QueueUserWorkItem methods, as shown here:

public static Boolean QueueUserWorkItem(WaitCallback wc, Object state);
public static Boolean QueueUserWorkItem(WaitCallback wc); 

These methods queue a "work item" (and optional state data) to a thread in the thread pool and return immediately. A work item is simply a method (identified by the wc parameter) that is called and passed a single parameter, state (the state data). The version of QueueUserWorkItem without the state parameter passes null to the callback method. Eventually, some thread in the pool will process the work item, causing your method to be called. The callback method you write must match the System.Threading.WaitCallback delegate type, which is defined as follows:

public delegate void WaitCallback(Object state);

Notice that you never call any method that creates a thread yourself; the CLR's thread pool will automatically create a thread, if necessary, and reuse an existing thread if possible. Also, this thread is not immediately destroyed after it processes the callback method; it goes back into the thread pool so that it is ready to handle any other work items in the queue. Using QueueUserWorkItem might make your application more efficient because you won't be creating and destroying threads for every single client request.

The code in Figure 1 demonstrates how to have the thread pool call a method asynchronously.

Figure 1 Thread Pool Calls a Method

using System;
using System.Threading;

class App {
   static void Main() {
      Console.WriteLine("Main thread: Queuing an aynchronous 
          operation.");
      ThreadPool.QueueUserWorkItem(new WaitCallback(MyAsyncOperation));

      Console.WriteLine("Main thread: Performing other operations.");
      // ...

      Console.WriteLine("Main thread: Pausing to simulate doing other 
          operations.");
      Console.ReadLine();
   }

   // The callback method's signature MUST match that of a 
   // System.Threading.WaitCallback delegate (it takes an 
   // Object parameter and returns void)
   static void MyAsyncOperation(Object state) {
      Console.WriteLine("ThreadPool thread: Performing aynchronous 
          operation.");
      // ...
      Thread.Sleep(5000);    // Sleep for 5 seconds to simulate doing 
                             // work

      // Returning from this method causes the thread to 
      // suspend itself waiting for another task
   }
}
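
Figure 1 uses the QueueUserWorkItem overload that takes no state data. If you want each work item to carry its own data, use the overload that accepts a state object. Here is a minimal sketch (the StateDemo class and ProcessRequest method are names I've made up just for this illustration):

using System;
using System.Threading;

class StateDemo {
   static void Main() {
      // Queue the same callback twice, each time with different state data.
      ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessRequest), "Request #1");
      ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessRequest), "Request #2");

      Console.WriteLine("Main thread: Hit Enter to terminate the sample.");
      Console.ReadLine();
   }

   // The state parameter receives whatever object was passed to QueueUserWorkItem
   // (or null if the single-parameter overload was used).
   static void ProcessRequest(Object state) {
      Console.WriteLine("ThreadPool thread: Processing " + state);
   }
}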

Capability 2: Calling a Method at Timed Intervals

If your application needs to perform a certain task at a certain time or if your application needs to execute some method periodically, the thread pool is the perfect thing for you to use. The System.Threading namespace defines the Timer class. When you construct an instance of the Timer class, you are telling the thread pool that you want a method of yours called back at a particular time in the future. The Timer class offers four constructors:

public Timer(TimerCallback callback, Object state,
   Int32 dueTime, Int32 period);
public Timer(TimerCallback callback, Object state,
   UInt32 dueTime, UInt32 period);
public Timer(TimerCallback callback, Object state,
   Int64 dueTime, Int64 period);
public Timer(TimerCallback callback, Object state,
   TimeSpan dueTime, TimeSpan period);

All four constructors construct a Timer object identically. The callback parameter identifies the method that you want called back by a thread pool thread. Of course, the callback method you write must match the System.Threading.TimerCallback delegate type, which is defined as follows:

public delegate void TimerCallback(Object state);

The constructor's state parameter allows you to pass state data to the callback method; you can pass null if you have no state data to pass. You use the dueTime parameter to tell the thread pool how many milliseconds to wait before calling your callback method for the very first time. You can specify the number of milliseconds using a signed or unsigned 32-bit value, a signed 64-bit value, or a TimeSpan value. If you want the callback method called immediately, specify 0 for the dueTime parameter. The last parameter, period, allows you to specify how long, in milliseconds, to wait before each successive call. If you pass 0 for this parameter, then the thread pool will call the callback method just once.

After constructing a Timer object, the thread pool knows what to do and monitors the time automatically for you. However, the Timer class offers some additional methods allowing you to communicate with the thread pool to modify when (or if) the method should be called back. Specifically, the Timer class offers several Change and Dispose methods:

public Boolean Change(Int32    dueTime, Int32    period);
public Boolean Change(UInt32   dueTime, UInt32   period);
public Boolean Change(Int64    dueTime, Int64    period);
public Boolean Change(TimeSpan dueTime, TimeSpan period); 
public void Dispose();
public Boolean Dispose(WaitHandle notifyObject); 

The Change method allows you to change the Timer object's due time and period. The Dispose method allows you to cancel the callback altogether and, optionally, to signal the kernel object identified by the notifyObject parameter when all pending callbacks for the timer have completed.

The code in Figure 2 demonstrates how to have a thread pool thread call a method immediately and every 2000 milliseconds (or two seconds) thereafter.

Figure 2 Using the Period Parameter

using System;
using System.Threading;

class App {
   static void Main() {
      Console.WriteLine("Checking for status updates every 2 seconds.");
      Console.WriteLine("   (Hit Enter to terminate the sample)");
      Timer timer = new Timer(new TimerCallback(CheckStatus), null, 0, 
                              2000);
      Console.ReadLine();
   }

   // The callback method's signature MUST match that of a 
   // System.Threading.TimerCallback delegate (it takes an 
   // Object parameter and returns void)
   static void CheckStatus(Object state) {
      Console.WriteLine("Checking Status.");
      // ...
   }
}
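
Figure 2 constructs the timer and never touches it again. As a small sketch of the Change and Dispose methods described earlier (the ChangeDemo class and OnTimer method are names I've chosen just for this illustration), the following code constructs a one-shot timer by passing Timeout.Infinite for the period, later reschedules it with Change so that it fires every second, and finally calls Dispose with a notification event so that the main thread knows when all pending callbacks have completed:

using System;
using System.Threading;

class ChangeDemo {
   static void Main() {
      // Fire once after 1 second; Timeout.Infinite means "do not repeat".
      Timer timer = new Timer(new TimerCallback(OnTimer), null, 1000, Timeout.Infinite);
      Thread.Sleep(3000);

      // Reschedule: fire immediately and then every second thereafter.
      timer.Change(0, 1000);
      Thread.Sleep(5000);

      // Cancel further callbacks; the event is signaled once any pending
      // callbacks have completed.
      AutoResetEvent done = new AutoResetEvent(false);
      timer.Dispose(done);
      done.WaitOne();
      Console.WriteLine("Timer disposed; no more callbacks will occur.");
   }

   static void OnTimer(Object state) {
      Console.WriteLine("Timer callback executed.");
   }
}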

Capability 3: Calling a Method When a Single Kernel Object Becomes Signaled

While doing their performance studies, Microsoft researchers discovered that many applications spawn threads simply to wait for a single kernel object to become signaled. Once the object is signaled, the thread posts some sort of notification to another thread and then loops back, waiting for the object to signal again. Some developers even write code in which several threads each wait on a single object. This is a big waste of system resources. So, if you currently have threads in your application that wait for single kernel objects to become signaled, then the thread pool is, again, the perfect resource for you to increase your application's performance.

To have a thread pool thread call your callback method when a kernel object becomes signaled, you again use static methods defined in the System.Threading.ThreadPool class. Specifically, your code must call one of the overloaded RegisterWaitForSingleObject methods you see in Figure 3.

Figure 3 RegisterWaitForSingleObject Methods

public static RegisteredWaitHandle RegisterWaitForSingleObject(
   WaitHandle h, WaitOrTimerCallback callback, Object state, 
   UInt32 milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
   WaitHandle h, WaitOrTimerCallback callback, Object state, 
   Int32 milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
   WaitHandle h, WaitOrTimerCallback callback, Object state, 
   TimeSpan milliseconds, Boolean executeOnlyOnce);

public static RegisteredWaitHandle RegisterWaitForSingleObject(
   WaitHandle h, WaitOrTimerCallback callback, Object state,
   Int64 milliseconds, Boolean executeOnlyOnce); 

When you call one of these methods, the h parameter identifies the kernel object that you want the thread pool to wait on. Since this parameter is of the abstract base class System.Threading.WaitHandle, you can specify any class derived from this base class. Specifically, you can pass a reference to an AutoResetEvent, ManualResetEvent, or Mutex object. The second parameter, callback, identifies the method that you want the thread pool thread to call. The callback method that you implement must match the System.Threading.WaitOrTimerCallback delegate type, which is defined in the following line of code:

public delegate void WaitOrTimerCallback(Object state,
                                         Boolean timedOut);

The third parameter, state, allows you to specify some state data that should be passed to the callback method; pass null if you have no special state data to pass. The fourth parameter, milliseconds, allows you to tell the thread pool how long it should wait for the kernel object to become signaled. It is common to pass -1 here to indicate an infinite timeout. If the last parameter, executeOnlyOnce, is true, then a thread pool thread will execute the callback method just once. But, if executeOnlyOnce is false, then a thread pool thread will execute the callback method every time the kernel object is signaled. This is most useful with an AutoResetEvent object.

When the callback method is called, it is passed state data and a Boolean value, timedOut. If timedOut is false, then the method knows that it is being called because the kernel object became signaled. If timedOut is true, then the method knows it is being called because the kernel object did not become signaled in the time specified. The callback method should perform whatever action is necessary.

In the prototypes shown earlier, you'll notice that the RegisterWaitForSingleObject method returns a RegisteredWaitHandle object. This object identifies the kernel object that the thread pool is waiting on. If, for some reason, your application wants to tell the thread pool to stop watching the registered wait handle, your application can call RegisteredWaitHandle's Unregister method:

public Boolean Unregister(WaitHandle waitObject);

The waitObject parameter indicates how you want to be notified when all queued work items have executed. You should pass null for this parameter if you don't want a notification. If you pass a valid reference to a WaitHandle-derived object, then the thread pool will signal the object when all pending work items for the registered wait handle have executed.

The code in Figure 4 demonstrates how to have a thread pool thread call a method whenever a kernel object becomes signaled.

Figure 4 Method Called When Object Signaled

using System;
using System.Threading;

class App {
   static void Main() {
      AutoResetEvent are = new AutoResetEvent(false);
      RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
         are, new WaitOrTimerCallback(EventSignalled), null, -1, false);
      for (Int32 x = 0; x < 5; x++) {
         Thread.Sleep(5000);
         are.Set();
      }

      rwh.Unregister(null);
      Console.WriteLine("Hit Enter to terminate the sample");
      Console.ReadLine();
   }

   // The callback method's signature MUST match that of a 
   // System.Threading.WaitOrTimerCallback delegate (it takes an 
   // Object parameter and a Boolean and returns void)
   static void EventSignalled(Object state, Boolean timedOut) {
      if (timedOut) {
         Console.WriteLine("Timed-out while waiting for the 
                           AutoResetEvent.");
      } else {
         Console.WriteLine("The AutoResetEvent became signalled.");
      }
      // ...
   }
}
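
Figure 4 waits forever and lets the callback run every time the event is signaled. As a sketch of the other options discussed earlier (the TimeoutDemo class and EventOrTimeout method are names invented for this illustration), the following code registers a 3-second timeout with executeOnlyOnce set to true, so the callback runs exactly once, either because the event was signaled or because the wait timed out, and then unregisters the wait using a notification event so the main thread knows when any pending callback has completed:

using System;
using System.Threading;

class TimeoutDemo {
   static void Main() {
      AutoResetEvent are = new AutoResetEvent(false);

      // Wait at most 3000 ms for the event; executeOnlyOnce is true, so the
      // callback runs a single time whether the event is signaled or the wait
      // times out.
      RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
         are, new WaitOrTimerCallback(EventOrTimeout), null, 3000, true);

      // Comment out the next line to see the timed-out case instead.
      are.Set();
      Thread.Sleep(1000);   // Give the thread pool a chance to run the callback

      // Unregister the wait and block until any pending callback has completed.
      AutoResetEvent done = new AutoResetEvent(false);
      rwh.Unregister(done);
      done.WaitOne();

      Console.WriteLine("Hit Enter to terminate the sample");
      Console.ReadLine();
   }

   static void EventOrTimeout(Object state, Boolean timedOut) {
      if (timedOut) {
         Console.WriteLine("The wait timed out before the AutoResetEvent was signaled.");
      } else {
         Console.WriteLine("The AutoResetEvent became signaled.");
      }
   }
}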

Conclusion

In this column, I've explained the need for thread pools and demonstrated how to use the various capabilities offered by the CLR's thread pool. By now you should see the value that a thread pool can bring to your own development efforts to improve your application's performance and simplify your own code.

Send your questions and comments for Jeff to dot-net@microsoft.com.

Jeffrey Richter is a cofounder of Wintellect (https://www.Wintellect.com), a training, debugging, and consulting firm specializing in .NET and Windows technologies. He is the author of Applied Microsoft .NET Framework Programming (Microsoft Press, 2002) and several programming books on Windows.