

Scene analysis for MediaCapture

This article shows you how to use the SceneAnalysisEffect and the FaceDetectionEffect to analyze the content of the media capture preview stream.

Scene analysis effect

The SceneAnalysisEffect analyzes the video frames in the media capture preview stream and recommends processing options to improve the capture result. Currently, the effect supports detecting whether the capture would be improved by using High Dynamic Range (HDR) processing.

If the effect recommends using HDR, you can do so in the following ways:

- Use the AdvancedPhotoCapture class to capture photos using the system's built-in HDR processing algorithm.
- Use the HdrVideoControl to capture video using the system's built-in HDR processing algorithm.
- Use the VariablePhotoSequenceController to capture a sequence of frames that you can then composite using a custom HDR implementation.
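As an illustration of the second option, the following is a minimal sketch, not part of this article's snippets, that turns on HDR video when the device supports it. It assumes an initialized m_mediaCapture member, as used elsewhere in this article; HdrVideoControl and HdrVideoMode live in the Windows.Media.Devices namespace.

```csharp
// Requires: using System.Linq; using Windows.Media.Devices;
// Enable HDR video if the capture device supports it.
var hdrControl = m_mediaCapture.VideoDeviceController.HdrVideoControl;
if (hdrControl.Supported && hdrControl.SupportedModes.Contains(HdrVideoMode.On))
{
    hdrControl.Mode = HdrVideoMode.On;
}
```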

Initialize the scene analysis effect and add it to the preview stream

Video effects are implemented using two APIs: an effect definition, which provides settings that the capture device needs to initialize the effect, and an effect instance, which can be used to control the effect. Because you may want to access the effect instance from multiple places within your code, you should typically declare a member variable to hold the object.

private SceneAnalysisEffect m_sceneAnalysisEffect;

In your app, after you have initialized the MediaCapture object, create a new instance of SceneAnalysisEffectDefinition.

Register the effect with the capture device by calling AddVideoEffectAsync on your MediaCapture object, providing the SceneAnalysisEffectDefinition and specifying MediaStreamType.VideoPreview to indicate that the effect should be applied to the video preview stream, as opposed to the capture stream. AddVideoEffectAsync returns an instance of the added effect. Because this method can be used with multiple effect types, you must cast the returned instance to a SceneAnalysisEffect object.

To receive the results of the scene analysis, you must register a handler for the SceneAnalyzed event.

Currently, the scene analysis effect only includes the high dynamic range analyzer. Enable HDR analysis by setting the effect's HighDynamicRangeAnalyzer.Enabled property to true.

// Create the definition
var definition = new SceneAnalysisEffectDefinition();

// Add the effect to the video preview stream
m_sceneAnalysisEffect = (SceneAnalysisEffect)await m_mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

// Subscribe to notifications about scene information
m_sceneAnalysisEffect.SceneAnalyzed += SceneAnalysisEffect_SceneAnalyzed;

// Enable HDR analysis
m_sceneAnalysisEffect.HighDynamicRangeAnalyzer.Enabled = true;

Implement the SceneAnalyzed event handler

The results of the scene analysis are returned in the SceneAnalyzed event handler. The SceneAnalyzedEventArgs object passed into the handler has a SceneAnalysisEffectFrame object, which has a HighDynamicRangeOutput object. The Certainty property of the high dynamic range output provides a value between 0 and 1.0, where 0 indicates that HDR processing would not help improve the capture result and 1.0 indicates that HDR processing would help. You can decide the threshold point at which you want to use HDR, or show the results to the user and let the user decide.

private void SceneAnalysisEffect_SceneAnalyzed(SceneAnalysisEffect sender, SceneAnalyzedEventArgs args)
{
    double hdrCertainty = args.ResultFrame.HighDynamicRange.Certainty;

    // Certainty value is between 0.0 and 1.0
    if (hdrCertainty > MyCertaintyCap)
    {
        DispatcherQueue.TryEnqueue(() =>
        {
            tbStatus.Text = "Enabling HDR capture is recommended.";
        });
    }
}

The HighDynamicRangeOutput object passed into the handler also has a FrameControllers property, which contains suggested frame controllers for capturing a variable photo sequence for HDR processing. For more information, see Variable photo sequence.
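The following sketch, offered as an illustration rather than part of this article's snippets, shows how those suggested controllers could be fed into the variable photo sequence controller. ApplyRecommendedFrameControllers is a hypothetical helper name, and the code assumes an initialized m_mediaCapture member; HighDynamicRangeOutput is in Windows.Media.Core, and FrameController and VariablePhotoSequenceController are in Windows.Media.Devices.Core.

```csharp
// Requires: using Windows.Media.Core; using Windows.Media.Devices.Core;
// Hypothetical helper: copy the frame controllers recommended by the HDR
// analyzer into the variable photo sequence controller, if supported.
private void ApplyRecommendedFrameControllers(HighDynamicRangeOutput hdrOutput)
{
    var sequenceController =
        m_mediaCapture.VideoDeviceController.VariablePhotoSequenceController;

    if (sequenceController.Supported)
    {
        sequenceController.DesiredFrameControllers.Clear();
        foreach (FrameController controller in hdrOutput.FrameControllers)
        {
            sequenceController.DesiredFrameControllers.Add(controller);
        }
    }
}
```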

Clean up the scene analysis effect

When your app is done capturing, before disposing of the MediaCapture object, you should disable the scene analysis effect by setting the effect's HighDynamicRangeAnalyzer.Enabled property to false, and unregister your SceneAnalyzed event handler. Call MediaCapture.ClearEffectsAsync, specifying the video preview stream, because that is the stream to which the effect was added. Finally, set your member variable to null.

// Disable detection
m_sceneAnalysisEffect.HighDynamicRangeAnalyzer.Enabled = false;

m_sceneAnalysisEffect.SceneAnalyzed -= SceneAnalysisEffect_SceneAnalyzed;

// Remove the effect from the preview stream
await m_mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);

// Clear the member variable that held the effect instance
m_sceneAnalysisEffect = null;

Face detection effect

Use the FaceDetectionEffect to identify the location of faces within the media capture preview stream. The effect lets you receive a notification whenever a face is detected in the preview stream, and it provides the bounding box for each detected face within the preview frame. On supported devices, the face detection effect also provides enhanced exposure and focus on the most important face in the scene.

Initialize the face detection effect and add it to the preview stream

Video effects are implemented using two APIs: an effect definition, which provides settings that the capture device needs to initialize the effect, and an effect instance, which can be used to control the effect. Because you may want to access the effect instance from multiple places within your code, you should typically declare a member variable to hold the object.

FaceDetectionEffect m_faceDetectionEffect;

In your app, after you have initialized the MediaCapture object, create a new instance of FaceDetectionEffectDefinition. Set the DetectionMode property to prioritize faster face detection or more accurate face detection. Set SynchronousDetectionEnabled to specify that incoming frames are not delayed waiting for face detection to complete, as this can result in a choppy preview experience.

Register the effect with the capture device by calling AddVideoEffectAsync on your MediaCapture object, providing the FaceDetectionEffectDefinition and specifying MediaStreamType.VideoPreview to indicate that the effect should be applied to the video preview stream, as opposed to the capture stream. AddVideoEffectAsync returns an instance of the added effect. Because this method can be used with multiple effect types, you must cast the returned instance to a FaceDetectionEffect object.

Enable or disable the effect by setting the FaceDetectionEffect.Enabled property. Adjust how often the effect analyzes frames by setting the FaceDetectionEffect.DesiredDetectionInterval property. Both of these properties can be adjusted while media capture is in progress.


// Create the definition, which will contain some initialization settings
var definition = new FaceDetectionEffectDefinition();

// To ensure preview smoothness, do not delay incoming samples
definition.SynchronousDetectionEnabled = false;

// In this scenario, choose detection speed over accuracy
definition.DetectionMode = FaceDetectionMode.HighPerformance;

// Add the effect to the preview stream
m_faceDetectionEffect = (FaceDetectionEffect)await m_mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

// Choose the shortest interval between detection events
m_faceDetectionEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(33);

// Start detecting faces
m_faceDetectionEffect.Enabled = true;

Receive notifications when faces are detected

If you want to perform some action when faces are detected, such as drawing a box around detected faces in the video preview, you can register for the FaceDetected event.

// Register for face detection events
m_faceDetectionEffect.FaceDetected += FaceDetectionEffect_FaceDetected;

In the handler for the event, you can get a list of all faces detected in a frame by accessing the FaceDetectionEffectFrame.DetectedFaces property of the FaceDetectedEventArgs. The FaceBox property is a BitmapBounds structure that describes the rectangle containing the detected face, in units relative to the preview stream dimensions. To view sample code that transforms the preview stream coordinates into screen coordinates, see the face detection UWP sample.

private void FaceDetectionEffect_FaceDetected(FaceDetectionEffect sender, FaceDetectedEventArgs args)
{
    foreach (Windows.Media.FaceAnalysis.DetectedFace face in args.ResultFrame.DetectedFaces)
    {
        BitmapBounds faceRect = face.FaceBox;

        // Draw a rectangle on the preview stream for each face
    }
}

Clean up the face detection effect

When your app is done capturing, before disposing of the MediaCapture object, you should disable the face detection effect with FaceDetectionEffect.Enabled and unregister your FaceDetected event handler if you previously registered one. Call MediaCapture.ClearEffectsAsync, specifying the video preview stream, because that is the stream to which the effect was added. Finally, set your member variable to null.

// Disable detection
m_faceDetectionEffect.Enabled = false;

// Unregister the event handler
m_faceDetectionEffect.FaceDetected -= FaceDetectionEffect_FaceDetected;

// Remove the effect from the preview stream
await m_mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);

// Clear the member variable that held the effect instance
m_faceDetectionEffect = null;

Check for focus and exposure support for detected faces

Not all devices have a capture device that can adjust its focus and exposure based on detected faces. Because face detection consumes device resources, you may only want to enable face detection on devices that can use the feature to enhance capture. To see if face-based capture optimization is available, get the VideoDeviceController for your initialized MediaCapture and then get the video device controller's RegionsOfInterestControl. Check to see if MaxRegions supports at least one region. Then check to see if either AutoExposureSupported or AutoFocusSupported is true. If these conditions are met, then the device can take advantage of face detection to enhance capture.

var regionsControl = m_mediaCapture.VideoDeviceController.RegionsOfInterestControl;
bool faceDetectionFocusAndExposureSupported =
    regionsControl.MaxRegions > 0 &&
    (regionsControl.AutoExposureSupported || regionsControl.AutoFocusSupported);
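If the check above succeeds, a detected face rectangle can be passed to the driver as a region of interest. The following is a hedged sketch, not part of this article's snippets: SetFaceRegionAsync is a hypothetical helper name, previewWidth and previewHeight are assumed to match the preview stream resolution in which the face was detected, and an initialized m_mediaCapture member is assumed. RegionOfInterest and RegionOfInterestType are in Windows.Media.Devices, Rect is in Windows.Foundation, and BitmapBounds is in Windows.Graphics.Imaging.

```csharp
// Requires: using System.Threading.Tasks; using Windows.Foundation;
//           using Windows.Graphics.Imaging; using Windows.Media.Devices;
// Hypothetical helper: ask the capture device to focus and expose on a
// detected face, normalizing the pixel-based FaceBox to the 0.0-1.0 range
// expected when BoundsNormalized is true.
private async Task SetFaceRegionAsync(BitmapBounds faceBox, uint previewWidth, uint previewHeight)
{
    var region = new RegionOfInterest
    {
        Bounds = new Rect(
            (double)faceBox.X / previewWidth,
            (double)faceBox.Y / previewHeight,
            (double)faceBox.Width / previewWidth,
            (double)faceBox.Height / previewHeight),
        BoundsNormalized = true,
        AutoFocusEnabled = true,
        AutoExposureEnabled = true,
        Type = RegionOfInterestType.Face
    };

    await m_mediaCapture.VideoDeviceController.RegionsOfInterestControl
        .SetRegionsAsync(new[] { region });
}
```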