The role of the Windows Display Driver Model in the DWM

The Problem

Ever since the advent of dedicated graphics processors (even old-school ones that only accelerated GDI blits), programming against them has resembled programming against the main CPU/memory system before there was virtual memory or interruptible/preemptible processes.  That is, you had to directly manage all the video memory yourself, and you counted on your graphics instructions running to completion without interruption.  Specifically, DirectX applications have always needed to deal with not getting the video memory they ask for, and with "surface lost" messages when their video memory gets kicked out for one reason or another.  This puts a major burden on the programmer and, probably even more importantly, makes for a very poor ecosystem for running multiple video-memory-intensive applications at once, because the likelihood of their cooperating sensibly on resource management is virtually nil.
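
To make that burden concrete, here's a deliberately simplified toy model (all names here are invented for illustration, not real DirectX APIs) of the contract pre-WDDM applications lived with: allocation can fail outright, surfaces can vanish out from under you, and it's your job to notice and recover.

```python
class LegacyVideoMemory:
    """Toy model of pre-WDDM video memory: a fixed pool, no paging.
    Allocation can fail, and surfaces can be evicted at any time
    (the app sees this later as a "surface lost" condition)."""

    def __init__(self, slots):
        self.slots = slots
        self.owners = {}                    # surface -> owning app

    def allocate(self, app, surface):
        if len(self.owners) >= self.slots:
            return False                    # out of video memory: the app's problem
        self.owners[surface] = app
        return True

    def evict(self, surface):
        # E.g. a mode switch, or another app grabbing the memory.
        self.owners.pop(surface, None)


def render(mem, app, surfaces):
    """What every app had to do each frame: detect lost surfaces and
    recreate them before drawing -- and recreation can itself fail."""
    recreated = []
    for s in surfaces:
        if s not in mem.owners:             # "surface lost"
            if not mem.allocate(app, s):
                raise MemoryError("out of video memory")
            recreated.append(s)
    return recreated
```

Every application carries this recovery logic independently, with no view of what the others are doing, which is exactly why cooperation across apps was so unlikely.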

Well, the DWM is a DirectX application with a couple of unique challenges in this arena:

  • The DWM's memory requirements vary widely.  That's because they vary directly with the number of windows the user has open, and while there are known typical usage patterns, the user isn't, and cannot be, limited to some fixed number of open windows.
  • The DWM operates in an environment where other DirectX applications are running: video playback, WPF applications, windowed games (the Vista "inbox" games like Solitaire are now written in DirectX), and so on.  In fact, the DWM is responsible for the final presentation of those applications, so it's critical that such DirectX applications "play well together" and play well with the DWM.

The above challenges don't mesh well with the DirectX programming model described in the first paragraph.

Enter WDDM

It's the Windows Display Driver Model (WDDM, formerly known as LDDM) that makes all of this viable.  WDDM is the new DirectX driver model for Windows Vista and beyond.  From the perspective of the DWM it does three main things:

  1. Virtualizes video memory.
  2. Allows interruptibility of the GPU.
  3. Allows DirectX surfaces to be shared across processes.

The surface-sharing feature is key to redirection of DirectX applications, but that's the topic of a later post.  Here we're going to discuss the first two.  There are other motivations for the WDDM, and certainly many more details to it, but those aren't as immediately relevant to the DWM as what's discussed here.

Virtualizing Video Memory

With the WDDM, graphics memory is virtualized.  Just as with system memory, when an allocation is requested and primary storage is exhausted, the system falls back to secondary storage, and it manages all the paging algorithms and mechanics for faulting data from secondary storage back into primary storage when it needs to be operated on.  In the case of video memory, the primary storage is video memory, and the secondary storage is system memory.

In the event that a video memory allocation is required and both video memory and system memory are full, the WDDM and the overall virtual memory system will turn to disk for video memory surfaces.  This is an extremely unusual case, and performance would suffer dearly, but the point is that the system is robust enough for this to occur and for the application to reliably continue.
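
The tiering described above can be sketched as a toy model (names and the LRU policy here are my own illustration, not the actual WDDM video memory manager): surfaces live in video memory, system memory, or on disk; allocation never fails, and touching a surface faults it back into video memory, demoting least-recently-used surfaces down the hierarchy as needed.

```python
from collections import OrderedDict

class VirtualizedVideoMemory:
    """Toy model of WDDM-style video memory virtualization with
    three tiers and LRU demotion.  Allocation never fails."""

    def __init__(self, video_slots, system_slots):
        self.video = OrderedDict()          # insertion order ~= LRU order
        self.system = OrderedDict()
        self.disk = OrderedDict()           # unbounded, extremely slow
        self.video_slots = video_slots
        self.system_slots = system_slots

    def _demote_from_video(self):
        surface, _ = self.video.popitem(last=False)   # evict LRU surface
        if len(self.system) >= self.system_slots:
            self._demote_from_system()
        self.system[surface] = None

    def _demote_from_system(self):
        surface, _ = self.system.popitem(last=False)
        self.disk[surface] = None           # the "unusual case": spill to disk

    def allocate(self, surface):
        if len(self.video) >= self.video_slots:
            self._demote_from_video()
        self.video[surface] = None          # never fails

    def touch(self, surface):
        """Fault a surface into video memory before the GPU uses it."""
        if surface in self.video:
            self.video.move_to_end(surface) # refresh LRU position
            return "hit"
        tier = self.system if surface in self.system else self.disk
        del tier[surface]
        self.allocate(surface)              # page back into video memory
        return "fault"
```

The point of the sketch is the shape of the contract, not the policy: from the application's side, `allocate` always succeeds and `touch` always works, with the cost of a miss hidden inside the manager.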

The upshot of all of this is that applications don't need to greedily grab all the memory they might want, since they aren't guaranteed true video memory anyhow and can always be paged out.  This brings the goal of a cooperative set of DirectX applications much, much closer to reality.  It also means that there are effectively no more "surface lost" messages from DirectX, and no failed allocations.

From the DWM's perspective, this is absolutely key, because the DWM can and will allocate memory, and those allocations are made in conjunction with allocations from other applications on the system, with the "right" surfaces placed in true video memory and paged in and out as necessary.  Now, naturally, this is a somewhat naive viewpoint, since this is the first generation of this virtualizer, but we're observing it to be doing quite well, and it will keep improving.

Interruptibility of the GPU

So, memory's virtualized, and that's good, but what about those little computrons that run around the GPU doing stuff?  Can one application's GPU commands be preempted by another application's?  Prior to WDDM, they could not.  With WDDM, they can be.  This is referred to as WDDM scheduling: WDDM arbitrates usage of the GPU, parceling out computation among the applications requesting it.  To do this, WDDM must be able to interrupt a computation in progress on the GPU and context switch in a different process's operation.  WDDM defines two levels of interruptibility to support this.

  • Basic Scheduling - this is the granularity of scheduling achievable with DirectX 9 class WDDM drivers and hardware.  An individual primitive and an individual shader program cannot be interrupted; they must run to completion before a context switch.
  • Advanced Scheduling - this is achievable with DirectX 10 class WDDM drivers and hardware.  Here the GPU can be interrupted within an individual primitive and within an individual shader program, leading to much finer-grained preemption.  Note that advanced scheduling is not a requirement for DX10; only certain hardware will support it.
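
The practical difference between the two levels is how long a pending context switch can be stalled.  A toy model (the function and its cycle accounting are my own illustration, not the real scheduler): under basic scheduling the worst-case wait is the cost of the largest primitive in flight, while under advanced scheduling it's roughly constant.

```python
def preemption_latency(primitive_costs, policy):
    """Worst-case delay, in GPU cycles, before a pending context switch
    can occur, under a toy model of WDDM scheduling.

    primitive_costs: cycles of shader work in each queued primitive.
    policy: "basic"    -> switch only at primitive boundaries
            "advanced" -> switch mid-primitive (modeled as one cycle)
    """
    if policy == "advanced":
        return 1 if primitive_costs else 0
    # Basic: the running primitive must finish first, so the worst
    # case is getting stuck behind the most expensive one.
    return max(primitive_costs, default=0)
```

This is also why an errant application matters more under basic scheduling: one huge primitive with an expensive shader pushes the worst-case latency past anyone's frame budget, and nothing else can run until it completes.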

The Desktop Window Manager uses DirectX 9, and thus Basic Scheduling.  So an application that makes errant use of the GPU, running complex shader programs across large primitives, can potentially glitch the DWM.  We have yet to see such applications, but no doubt there will be some that do this either unintentionally or by design.  Nonetheless, we don't believe this will be a common issue.