Custom video effects

This article describes how to create a Windows Runtime component that implements the IBasicVideoEffect interface to create custom effects for video streams. Custom effects can be used with several different Windows Runtime APIs including MediaCapture, which provides access to a device's camera, and MediaComposition, which allows you to create complex compositions out of media clips.

Add a custom effect to your app

A custom video effect is defined in a class that implements the IBasicVideoEffect interface. This class can't be included directly in your app's project. Instead, you must use a Windows Runtime component to host your video effect class.

Add a Windows Runtime component for your video effect

  1. In Microsoft Visual Studio, with your solution open, go to the File menu and select Add > New Project.
  2. Select the Windows Runtime Component (Universal Windows) project type.
  3. For this example, name the project VideoEffectComponent. This name will be referenced in code later.
  4. Click OK.
  5. The project template creates a class file called Class1.cs. In Solution Explorer, right-click the icon for Class1.cs and select Rename.
  6. Rename the file to ExampleVideoEffect.cs. Visual Studio will show a prompt asking if you want to update all references to the new name. Click Yes.
  7. Open ExampleVideoEffect.cs and update the class definition to implement the IBasicVideoEffect interface.
public sealed class ExampleVideoEffect : IBasicVideoEffect

You need to include the following namespaces in your effect class file in order to access all of the types used in the examples in this article.

using Windows.Media.Effects;
using Windows.Media.MediaProperties;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Graphics.Imaging;

Implement the IBasicVideoEffect interface using software processing

Your video effect must implement all of the methods and properties of the IBasicVideoEffect interface. This section walks you through a simple implementation of this interface that uses software processing.

Close method

The system will call the Close method on your class when the effect should shut down. You should use this method to dispose of any resources you have created. The argument to the method is a MediaEffectClosedReason that lets you know whether the effect was closed normally, whether an error occurred, or whether the effect does not support the required encoding format.

public void Close(MediaEffectClosedReason reason)
{
    // Dispose of effect resources
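    // The reason argument indicates whether the effect closed normally
    // (MediaEffectClosedReason.Done), an unknown error occurred, or the
    // required encoding format is not supported.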
}

DiscardQueuedFrames method

The DiscardQueuedFrames method is called when your effect should reset. A typical scenario is an effect that stores previously processed frames to use in processing the current frame. When this method is called, you should dispose of the set of previous frames you saved. This method can be used to reset any state related to previous frames, not only accumulated video frames.

private int frameCount;
public void DiscardQueuedFrames()
{
    frameCount = 0;
}

IsReadOnly property

The IsReadOnly property lets the system know whether your effect will write to its output frames. If your effect does not modify the video frames (for example, an effect that only performs analysis of the video frames), you should set this property to true, which will cause the system to efficiently copy the frame input to the frame output for you.

Tip

When the IsReadOnly property is set to true, the system copies the input frame to the output frame before ProcessFrame is called. Setting the IsReadOnly property to true does not restrict you from writing to the effect's output frames in ProcessFrame.

public bool IsReadOnly { get { return false; } }

SetEncodingProperties method

The system calls SetEncodingProperties on your effect to let you know the encoding properties for the video stream upon which the effect is operating. This method also provides a reference to the Direct3D device used for hardware rendering. The usage of this device is shown in the hardware processing example later in this article.

private VideoEncodingProperties encodingProperties;
public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    this.encodingProperties = encodingProperties;
}

SupportedEncodingProperties property

The system checks the SupportedEncodingProperties property to determine which encoding properties are supported by your effect. Note that if the consumer of your effect can't encode video using the properties you specify, it will call Close on your effect and will remove your effect from the video pipeline.

public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };

        // If the list is empty, the encoding type will be ARGB32.
        // return new List<VideoEncodingProperties>();
    }
}

Note

If you return an empty list of VideoEncodingProperties objects from SupportedEncodingProperties, the system will default to ARGB32 encoding.

SupportedMemoryTypes property

The system checks the SupportedMemoryTypes property to determine whether your effect will access video frames in software memory or in hardware (GPU) memory. If you return MediaMemoryTypes.Cpu, your effect will be passed input and output frames that contain image data in SoftwareBitmap objects. If you return MediaMemoryTypes.Gpu, your effect will be passed input and output frames that contain image data in IDirect3DSurface objects.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Cpu; } }

Note

If you specify MediaMemoryTypes.GpuAndCpu, the system will use either GPU or system memory, whichever is more efficient for the pipeline. When using this value, you must check in the ProcessFrame method to see whether the SoftwareBitmap or IDirect3DSurface passed into the method contains data, and then process the frame accordingly.
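
For example, a sketch of that check might look like the following. The two processing helper methods are hypothetical stand-ins for your CPU and GPU code paths.

public void ProcessFrame(ProcessVideoFrameContext context)
{
    // With MediaMemoryTypes.GpuAndCpu, only one of these properties is
    // populated for a given frame, so check which one contains data.
    if (context.InputFrame.SoftwareBitmap != null)
    {
        ProcessFrameInSoftware(context);   // hypothetical CPU path
    }
    else if (context.InputFrame.Direct3DSurface != null)
    {
        ProcessFrameOnGpu(context);        // hypothetical GPU path
    }
}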

TimeIndependent property

The TimeIndependent property lets the system know if your effect does not require uniform timing. When set to true, the system can use optimizations that enhance effect performance.

public bool TimeIndependent { get { return true; } }

SetProperties method

The SetProperties method allows the app that is using your effect to adjust effect parameters. Properties are passed as an IPropertySet map of property names and values.

private IPropertySet configuration;
public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}

This simple example dims the pixels in each video frame according to a specified value. A FadeValue property is declared, and TryGetValue is used to get the value set by the calling app. If no value was set, a default value of .5 is used.

public double FadeValue
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("FadeValue", out val))
        {
            return (double)val;
        }
        return .5;
    }
}

ProcessFrame method

The ProcessFrame method is where your effect modifies the image data of the video. The method is called once per frame and is passed a ProcessVideoFrameContext object. This object contains an input VideoFrame object that contains the incoming frame to be processed and an output VideoFrame object to which you write image data that will be passed on to the rest of the video pipeline. Each of these VideoFrame objects has a SoftwareBitmap property and a Direct3DSurface property, but which of these can be used is determined by the value you returned from the SupportedMemoryTypes property.

This example shows a simple implementation of the ProcessFrame method using software processing. For more information about working with SoftwareBitmap objects, see Imaging. An example ProcessFrame implementation using hardware processing is shown later in this article.

Accessing the data buffer of a SoftwareBitmap requires COM interop, so you should include the System.Runtime.InteropServices namespace in your effect class file.

using System.Runtime.InteropServices;

Add the following code inside the namespace for your effect to import the interface for accessing the image buffer.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

Note

Because this technique accesses a native, unmanaged image buffer, you will need to configure your project to allow unsafe code.

  1. In Solution Explorer, right-click the VideoEffectComponent project and select Properties.
  2. Select the Build tab.
  3. Select the Allow unsafe code check box.

Now you can add the ProcessFrame method implementation. First, this method obtains a BitmapBuffer object from both the input and output software bitmaps. Note that the output frame is opened for writing and the input for reading. Next, an IMemoryBufferReference is obtained for each buffer by calling CreateReference. Then, the actual data buffer is obtained by casting the IMemoryBufferReference objects to the COM interop interface defined above, IMemoryBufferByteAccess, and then calling GetBuffer.

Now that the data buffers have been obtained, you can read from the input buffer and write to the output buffer. The layout of the buffer is obtained by calling GetPlaneDescription, which provides information on the width, stride, and initial offset of the buffer. The number of bytes per pixel is determined by the encoding properties stored earlier in the SetEncodingProperties method. The buffer format information is used to find the index into the buffer for each pixel. The pixel value from the source buffer is copied into the target buffer, with the color values being multiplied by the FadeValue property defined for this effect to dim them by the specified amount.

public unsafe void ProcessFrame(ProcessVideoFrameContext context)
{
    using (BitmapBuffer buffer = context.InputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
    using (BitmapBuffer targetBuffer = context.OutputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Write))
    {
        using (var reference = buffer.CreateReference())
        using (var targetReference = targetBuffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacity);

            byte* targetDataInBytes;
            uint targetCapacity;
            ((IMemoryBufferByteAccess)targetReference).GetBuffer(out targetDataInBytes, out targetCapacity);

            var fadeValue = FadeValue;

            // Fill in the BGRA plane of the output frame.
            BitmapPlaneDescription bufferLayout = buffer.GetPlaneDescription(0);
            for (int i = 0; i < bufferLayout.Height; i++)
            {
                for (int j = 0; j < bufferLayout.Width; j++)
                {
                    // ARGB32 uses 4 bytes per pixel.
                    int bytesPerPixel = 4;
                    if (encodingProperties.Subtype != "ARGB32")
                    {
                        // If you support other encodings, adjust bytesPerPixel
                        // and the index into the buffer accordingly.
                    }

                    int idx = bufferLayout.StartIndex + bufferLayout.Stride * i + bytesPerPixel * j;

                    // Scale the B, G, and R channels by the fade value; copy the alpha channel unchanged.
                    targetDataInBytes[idx + 0] = (byte)(fadeValue * (float)dataInBytes[idx + 0]);
                    targetDataInBytes[idx + 1] = (byte)(fadeValue * (float)dataInBytes[idx + 1]);
                    targetDataInBytes[idx + 2] = (byte)(fadeValue * (float)dataInBytes[idx + 2]);
                    targetDataInBytes[idx + 3] = dataInBytes[idx + 3];
                }
            }
        }
    }
}

Implement the IBasicVideoEffect interface using hardware processing

Creating a custom video effect by using hardware (GPU) processing is almost identical to using software processing as described above. This section will show the few differences in an effect that uses hardware processing. This example uses the Win2D Windows Runtime API. For more information about using Win2D, see the Win2D documentation.

Use the following steps to add the Win2D NuGet package to the project you created as described in the Add a custom effect to your app section at the beginning of this article.

To add the Win2D NuGet package to your effect project

  1. In Solution Explorer, right-click the VideoEffectComponent project and select Manage NuGet Packages.
  2. At the top of the window, select the Browse tab.
  3. In the search box, enter Win2D.
  4. Select Win2D.uwp, and then select Install in the right pane.
  5. The Review Changes dialog shows you the package to be installed. Click OK.
  6. Accept the package license.

In addition to the namespaces included in the basic project setup, you will need to include the following namespaces provided by Win2D.

using Microsoft.Graphics.Canvas.Effects;
using Microsoft.Graphics.Canvas;

Because this effect will use GPU memory for operating on the image data, you should return MediaMemoryTypes.Gpu from the SupportedMemoryTypes property.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Gpu; } }

Set the encoding properties that your effect will support with the SupportedEncodingProperties property. When working with Win2D, you must use ARGB32 encoding.

public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };
    }
}

Use the SetEncodingProperties method to create a new Win2D CanvasDevice object from the IDirect3DDevice passed into the method.

private CanvasDevice canvasDevice;
public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    canvasDevice = CanvasDevice.CreateFromDirect3D11Device(device);
}

The SetProperties implementation is identical to the previous software processing example. This example uses a BlurAmount property to configure a Win2D blur effect.

private IPropertySet configuration;
public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}
public double BlurAmount
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("BlurAmount", out val))
        {
            return (double)val;
        }
        return 3;
    }
}

The last step is to implement the ProcessFrame method that actually processes the image data.

Using Win2D APIs, a CanvasBitmap is created from the input frame's Direct3DSurface property. A CanvasRenderTarget is created from the output frame's Direct3DSurface and a CanvasDrawingSession is created from this render target. A new Win2D GaussianBlurEffect is initialized, using the BlurAmount property that our effect exposes via SetProperties. Finally, the CanvasDrawingSession.DrawImage method is called to draw the input bitmap to the render target using the blur effect.

public void ProcessFrame(ProcessVideoFrameContext context)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        var gaussianBlurEffect = new GaussianBlurEffect
        {
            Source = inputBitmap,
            BlurAmount = (float)BlurAmount,
            Optimization = EffectOptimization.Speed
        };

        ds.DrawImage(gaussianBlurEffect);
    }
}

Add your custom effect to your app

To use your video effect from your app, you must add a reference to the effect project in your app.

  1. In Solution Explorer, under your app project, right-click References and select Add reference.
  2. Expand the Projects tab, select Solution, and then select the check box for your effect project name. For this example, the name is VideoEffectComponent.
  3. Click OK.

Add your custom effect to a camera video stream

You can set up a simple preview stream from the camera by following the steps in the article Simple camera preview access. Following those steps will provide you with an initialized MediaCapture object that is used to access the camera's video stream.
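
You can see the shape of that setup in the following minimal sketch; it assumes a CaptureElement named PreviewControl is declared in your XAML and omits the error handling and cleanup covered in that article.

// Minimal camera initialization sketch, following the steps in
// "Simple camera preview access". 'PreviewControl' is an assumed
// CaptureElement declared in the page's XAML.
var mediaCapture = new MediaCapture();
await mediaCapture.InitializeAsync();
PreviewControl.Source = mediaCapture;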

To add your custom video effect to a camera stream, first create a new VideoEffectDefinition object, passing in the namespace and class name for your effect. Next, call the MediaCapture object's AddVideoEffectAsync method to add your effect to the specified stream. This example uses the MediaStreamType.VideoPreview value to specify that the effect should be added to the preview stream. If your app supports video capture, you could also use MediaStreamType.VideoRecord to add the effect to the capture stream. AddVideoEffectAsync returns an IMediaExtension object representing your custom effect. You can use the SetProperties method to set the configuration for your effect.

After the effect has been added, StartPreviewAsync is called to start the preview stream.

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect");

IMediaExtension videoEffect =
   await mediaCapture.AddVideoEffectAsync(videoEffectDefinition, MediaStreamType.VideoPreview);

videoEffect.SetProperties(new PropertySet() { { "FadeValue", .25 } });

await mediaCapture.StartPreviewAsync();
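
If you later need to remove your effect from the stream, you can call the MediaCapture object's ClearEffectsAsync method, which removes all effects from the specified stream.

// Remove all effects, including the custom effect, from the preview stream.
await mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);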

Add your custom effect to a clip in a MediaComposition

For general guidance on creating media compositions from video clips, see Media compositions and editing. The following code snippet shows the creation of a simple media composition that uses a custom video effect. A MediaClip object is created by calling CreateFromFileAsync, passing in a video file that was selected by the user with a FileOpenPicker, and the clip is added to a new MediaComposition. Next, a new VideoEffectDefinition object is created, passing the namespace and class name for your effect and a PropertySet containing the effect's configuration to the constructor. Finally, the effect definition is added to the VideoEffectDefinitions collection of the MediaClip object.

MediaComposition composition = new MediaComposition();
var clip = await MediaClip.CreateFromFileAsync(pickedFile);
composition.Clips.Add(clip);

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect", new PropertySet() { { "FadeValue", .5 } });

clip.VideoEffectDefinitions.Add(videoEffectDefinition);
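
To see the effect applied to the clip, you can generate a preview stream from the composition and display it in your UI, as described in Media compositions and editing. The following minimal sketch assumes a MediaElement named mediaElement is declared in your XAML.

// Generate a preview stream from the composition, scaled to the size of the
// MediaElement, and display it. 'mediaElement' is an assumed MediaElement
// declared in the page's XAML.
var previewSource = composition.GeneratePreviewMediaStreamSource(
    (int)mediaElement.ActualWidth, (int)mediaElement.ActualHeight);
mediaElement.SetMediaStreamSource(previewSource);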