Using the Video Mixer Controls

[The component described on this page, Enhanced Video Renderer, is a legacy feature. It has been superseded by the Simple Video Renderer (SVR) exposed through the MediaPlayer and IMFMediaEngine components. To play video content you should send data into one of these components and allow them to instantiate the new video renderer. These components have been optimized for Windows 10 and Windows 11. Microsoft strongly recommends that new code use MediaPlayer or the lower level IMFMediaEngine APIs to play video media in Windows instead of the EVR, when possible. Microsoft suggests that existing code that uses the legacy APIs be rewritten to use the new APIs if possible.]

The EVR mixer provides several interfaces that an application can use to control how the mixer processes video. These interfaces can be used in either DirectShow or Media Foundation.

  • IMFVideoMixerBitmap interface. Alpha-blends a static bitmap image onto the video.

  • IMFVideoMixerControl interface. Controls how the EVR mixes video substreams.

  • IMFVideoProcessor interface. Controls color adjustment, image filters, and other video processing capabilities. This interface provides access to functionality implemented by the graphics driver, so the exact capabilities depend on the user's graphics driver.

The correct way to get pointers to these interfaces depends on whether you are using the DirectShow version of the EVR or the Media Foundation version. For the Media Foundation EVR, it also depends on whether you are using the EVR directly or through the Media Session. (Typically, an application uses the EVR through the Media Session rather than directly.)

To get a pointer to any of these interfaces, do the following:

  1. Get a pointer to the IMFGetService interface on the EVR.

    • If you are using the DirectShow EVR filter, call QueryInterface on the filter.

    • If you are using the EVR media sink directly, call QueryInterface on the media sink.

    • If you are using the Media Session, call QueryInterface on the Media Session.

  2. If you are using the Media Session, wait for the Media Session to send the MESessionTopologyStatus event with a status value of MF_TOPOSTATUS_READY. (Skip this step if you are not using the Media Session.)

  3. Call IMFGetService::GetService to get the mixer interface. Use the service identifier MR_VIDEO_MIXER_SERVICE.

Alpha Blending a Bitmap onto the Video

You can use the IMFVideoMixerBitmap interface to alpha blend a static bitmap onto the video during playback. You can store the bitmap in a Direct3D surface, specified as an IDirect3DSurface9 pointer, or use a GDI bitmap.

If you use a Direct3D surface for the bitmap, the surface can contain per-pixel alpha data, which will be used when the mixer alpha-blends the image. Alternatively, you can define a color key—that is, a single color that will be transparent wherever it appears in the bitmap. Also, you can specify an alpha value that will be applied to the entire image. You can also set a source rectangle to crop the bitmap, and a destination rectangle to position the bitmap within the video frame.

To set the bitmap, call IMFVideoMixerBitmap::SetAlphaBitmap. This method takes a pointer to an MFVideoAlphaBitmap structure that specifies the bitmap and the alpha-blending parameters. For example code, see the reference topic for the SetAlphaBitmap method.

After you set the bitmap, you can update the blending parameters, including the source and destination rectangles, by calling IMFVideoMixerBitmap::UpdateAlphaBitmapParameters. The update takes effect on the next video frame. The video must be playing for the update to occur. You can use this method to perform simple animations on the bitmap. (If you need more sophisticated effects, consider writing a custom EVR mixer.)

To clear the bitmap, call IMFVideoMixerBitmap::ClearAlphaBitmap.

Controlling Substreams

The EVR can mix one or more video substreams onto the primary video stream. To control substream mixing, use the IMFVideoMixerControl interface.

Video Processor Settings

The EVR mixer uses DirectX Video Acceleration (DXVA) to perform video processing on the input streams. The exact processing capabilities depend on the graphics driver. Video processing capabilities are described by using the DXVA2_VideoProcessorCaps structure. A particular set of capabilities is called a video processing mode, each mode being identified by a GUID. For a list of predefined GUIDs, see IDirectXVideoProcessorService::GetVideoProcessorDeviceGuids. The driver might define additional vendor-specific GUIDs, representing different combinations of capabilities.

To find the supported modes and the capabilities of each mode, do the following:

  1. Call IMFGetService::GetService to get a pointer to the mixer's IMFVideoProcessor interface.

  2. Call IMFVideoProcessor::GetAvailableVideoProcessorModes. This method returns an array of GUIDs that identify the available video processor modes. The list is returned in descending order of quality, with the highest-quality mode first. The list can change depending on the format of the video.

  3. For each GUID in the list, call IMFVideoProcessor::GetVideoProcessorCaps to find the capabilities of the corresponding video processor mode. The method fills a DXVA2_VideoProcessorCaps structure with a description of the capabilities.

  4. To select a mode, call IMFVideoProcessor::SetVideoProcessorMode. Otherwise, the EVR automatically selects a mode when streaming begins. In that case, you can call IMFVideoProcessor::GetVideoProcessorMode to find which mode was selected.

Most of the fields in the DXVA2_VideoProcessorCaps structure describe low-level driver behavior and are not of interest in a typical application. The following fields are most likely to be of interest:

  • DeviceCaps. This field indicates whether video processing is performed in hardware or software, and whether the graphics driver is an older DXVA 1.0 driver.

  • DeinterlaceTechnology. This field provides some indication of what level of deinterlacing quality you can expect if the source video is interlaced.

  • ProcAmpControlCaps. This field specifies which color adjustment controls are available. For a list of possible color adjustments, see ProcAmp Settings. If the driver cannot perform color adjustment, this field is zero.

  • VideoProcessorOperations. This field contains flags that describe miscellaneous video processing capabilities. Two flags of particular importance are the DXVA2_VideoProcess_SubStreams flag and the DXVA2_VideoProcess_SubStreamsExtended flag. At least one of these flags must be present for the EVR to mix substreams onto the reference video stream. If neither flag is present, the EVR is limited to one video stream.

  • NoiseFilterTechnology. This field indicates which noise filters are supported by the graphics driver. If the driver does not support noise filtering, the value is DXVA2_NoiseFilterTech_Unsupported.

  • DetailFilterTechnology. This field indicates which detail filters are supported by the graphics driver. If the driver does not support detail filtering, the value is DXVA2_DetailFilterTech_Unsupported.

Color Adjustment and Image Filtering

The graphics driver might support color adjustment (also called process amplification or simply ProcAmp) and image filtering. When performed by the GPU, color adjustment and image filtering can be done in real time with no CPU overhead.

To use these features, perform the following steps:

  1. Select a video processing mode as described in the previous section.

  2. Call IMFVideoProcessor::GetVideoProcessorCaps to find the video processing capabilities, as described in the previous section. The method fills in a DXVA2_VideoProcessorCaps structure that describes the capabilities, including whether the driver supports color adjustment and image filtering.

  3. For each color adjustment that is supported by the driver, call IMFVideoProcessor::GetProcAmpRange to find the range of possible values for that setting. This method also returns the default value for the setting. Call IMFVideoProcessor::GetProcAmpValues to find the current values of the settings. The values do not have specified units; it is up to the driver to define the range of values.

  4. Call IMFVideoProcessor::SetProcAmpValues to set a color adjustment value.

  5. If the driver supports image filtering, then each filter type (noise and detail) supports three settings—level, radius, and threshold—in both chroma and luma. (See DXVA Image Filter Settings.) For each setting, call IMFVideoProcessor::GetFilteringRange to get the range of possible values and call IMFVideoProcessor::GetFilteringValue to get the current value.

  6. To change an image filter setting, call IMFVideoProcessor::SetFilteringValue.

Enhanced Video Renderer