Detect faces in images or videos
This topic shows how to use the FaceDetector to detect faces in an image. The FaceTracker is optimized for tracking faces over time in a sequence of video frames.
For an alternative method of tracking faces using the FaceDetectionEffect, see Scene analysis for media capture.
The code in this article was adapted from the Basic Face Detection and Basic Face Tracking samples. You can download these samples to see the code used in context, or use them as a starting point for your own app.
Detect faces in a single image
The FaceDetector class allows you to detect one or more faces in a still image.
This example uses APIs from the following namespaces.
using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.Graphics.Imaging;
using Windows.Media.FaceAnalysis;
using Windows.UI.Xaml.Media.Imaging;
using Windows.UI.Xaml.Shapes;
Declare a class member variable for the FaceDetector object and for the list of DetectedFace objects that will be detected in the image.
FaceDetector faceDetector;
IList<DetectedFace> detectedFaces;
Face detection operates on a SoftwareBitmap object, which can be created in a variety of ways. In this example, a FileOpenPicker is used to allow the user to pick an image file in which faces will be detected. For more information about working with software bitmaps, see Imaging.
FileOpenPicker photoPicker = new FileOpenPicker();
photoPicker.ViewMode = PickerViewMode.Thumbnail;
photoPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
photoPicker.FileTypeFilter.Add(".jpg");
photoPicker.FileTypeFilter.Add(".jpeg");
photoPicker.FileTypeFilter.Add(".png");
photoPicker.FileTypeFilter.Add(".bmp");
StorageFile photoFile = await photoPicker.PickSingleFileAsync();
if (photoFile == null)
{
    return;
}
Use the BitmapDecoder class to decode the image file into a SoftwareBitmap. The face detection process is quicker with a smaller image, so you may want to scale the source image down. This can be performed during decoding by creating a BitmapTransform object, setting the ScaledWidth and ScaledHeight properties, and passing it into the call to GetSoftwareBitmapAsync, which returns the decoded and scaled SoftwareBitmap.
IRandomAccessStream fileStream = await photoFile.OpenAsync(FileAccessMode.Read);
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);
BitmapTransform transform = new BitmapTransform();
const float sourceImageHeightLimit = 1280;
if (decoder.PixelHeight > sourceImageHeightLimit)
{
    float scalingFactor = (float)sourceImageHeightLimit / (float)decoder.PixelHeight;
    transform.ScaledWidth = (uint)Math.Floor(decoder.PixelWidth * scalingFactor);
    transform.ScaledHeight = (uint)Math.Floor(decoder.PixelHeight * scalingFactor);
}
SoftwareBitmap sourceBitmap = await decoder.GetSoftwareBitmapAsync(decoder.BitmapPixelFormat, BitmapAlphaMode.Premultiplied, transform, ExifOrientationMode.IgnoreExifOrientation, ColorManagementMode.DoNotColorManage);
In the current version, the FaceDetector class only supports images in the Gray8 or Nv12 pixel formats. The SoftwareBitmap class provides the Convert method, which converts a bitmap from one format to another. This example converts the source image into the Gray8 pixel format if it is not already in that format. You can also use the GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported methods to determine at runtime whether a pixel format is supported, in case the set of supported formats is expanded in future versions; a sketch of that check follows the conversion code below.
// Use FaceDetector.GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported to dynamically
// determine supported formats
const BitmapPixelFormat faceDetectionPixelFormat = BitmapPixelFormat.Gray8;
SoftwareBitmap convertedBitmap;
if (sourceBitmap.BitmapPixelFormat != faceDetectionPixelFormat)
{
    convertedBitmap = SoftwareBitmap.Convert(sourceBitmap, faceDetectionPixelFormat);
}
else
{
    convertedBitmap = sourceBitmap;
}
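If you prefer not to hard-code the pixel format, the following sketch shows the runtime check mentioned above. It assumes the same sourceBitmap as the previous example and requires a using System.Linq; directive for the call to First.
// Convert only if the detector reports that the source format is unsupported,
// picking the first format from the detector's supported list.
SoftwareBitmap detectorInput = sourceBitmap;
if (!FaceDetector.IsBitmapPixelFormatSupported(sourceBitmap.BitmapPixelFormat))
{
    BitmapPixelFormat supportedFormat = FaceDetector.GetSupportedBitmapPixelFormats().First();
    detectorInput = SoftwareBitmap.Convert(sourceBitmap, supportedFormat);
}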
Instantiate the FaceDetector object by calling CreateAsync and then calling DetectFacesAsync, passing in the bitmap that has been scaled to a reasonable size and converted to a supported pixel format. This method returns a list of DetectedFace objects. ShowDetectedFaces is a helper method, shown below, that draws squares around the faces in the image.
if (faceDetector == null)
{
    faceDetector = await FaceDetector.CreateAsync();
}
detectedFaces = await faceDetector.DetectFacesAsync(convertedBitmap);
ShowDetectedFaces(sourceBitmap, detectedFaces);
Be sure to dispose of the objects that were created during the face detection process.
sourceBitmap.Dispose();
fileStream.Dispose();
convertedBitmap.Dispose();
To display the image and draw boxes around the detected faces, add a Canvas element to your XAML page.
<Canvas x:Name="VisualizationCanvas" Visibility="Visible" Grid.Row="0" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/>
Define some member variables to style the squares that will be drawn.
private readonly SolidColorBrush lineBrush = new SolidColorBrush(Windows.UI.Colors.Yellow);
private readonly double lineThickness = 2.0;
private readonly SolidColorBrush fillBrush = new SolidColorBrush(Windows.UI.Colors.Transparent);
In the ShowDetectedFaces helper method, a new ImageBrush is created and its source is set to a SoftwareBitmapSource created from the SoftwareBitmap representing the source image. The background of the XAML Canvas control is set to the image brush.
If the list of faces passed into the helper method isn't empty, loop through each face in the list and use the FaceBox property of the DetectedFace class to determine the position and size of the rectangle within the image that contains the face. Because the Canvas control is very likely to be a different size than the source image, you should scale the X and Y coordinates and the width and height of the FaceBox by dividing them by the ratio of the source image size to the actual size of the Canvas control.
private async void ShowDetectedFaces(SoftwareBitmap sourceBitmap, IList<DetectedFace> faces)
{
    ImageBrush brush = new ImageBrush();
    SoftwareBitmapSource bitmapSource = new SoftwareBitmapSource();
    await bitmapSource.SetBitmapAsync(sourceBitmap);
    brush.ImageSource = bitmapSource;
    brush.Stretch = Stretch.Fill;
    this.VisualizationCanvas.Background = brush;

    if (faces != null)
    {
        double widthScale = sourceBitmap.PixelWidth / this.VisualizationCanvas.ActualWidth;
        double heightScale = sourceBitmap.PixelHeight / this.VisualizationCanvas.ActualHeight;

        foreach (DetectedFace face in faces)
        {
            // Create a rectangle element for displaying the face box, but since we're using a Canvas
            // we must scale the rectangle according to the image's actual size.
            // The original FaceBox values are saved in the Rectangle's Tag field so we can update the
            // boxes when the Canvas is resized.
            Rectangle box = new Rectangle();
            box.Tag = face.FaceBox;
            box.Width = (uint)(face.FaceBox.Width / widthScale);
            box.Height = (uint)(face.FaceBox.Height / heightScale);
            box.Fill = this.fillBrush;
            box.Stroke = this.lineBrush;
            box.StrokeThickness = this.lineThickness;
            box.Margin = new Thickness((uint)(face.FaceBox.X / widthScale), (uint)(face.FaceBox.Y / heightScale), 0, 0);

            this.VisualizationCanvas.Children.Add(box);
        }
    }
}
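The comment in the helper above notes that the original FaceBox values are saved in each Rectangle's Tag field. One way to use them is a hypothetical SizeChanged handler like the following, which re-scales the boxes whenever the Canvas changes size. It assumes the most recent source bitmap's dimensions are kept in class members, here given the illustrative names sourcePixelWidth and sourcePixelHeight.
// Hypothetical handler for this.VisualizationCanvas.SizeChanged. The sourcePixelWidth and
// sourcePixelHeight members are assumed to be set whenever a new image is displayed.
private void VisualizationCanvas_SizeChanged(object sender, SizeChangedEventArgs e)
{
    double widthScale = sourcePixelWidth / this.VisualizationCanvas.ActualWidth;
    double heightScale = sourcePixelHeight / this.VisualizationCanvas.ActualHeight;

    foreach (var child in this.VisualizationCanvas.Children)
    {
        if (child is Rectangle box && box.Tag is BitmapBounds faceBox)
        {
            // Re-apply the saved face box coordinates at the new scale.
            box.Width = (uint)(faceBox.Width / widthScale);
            box.Height = (uint)(faceBox.Height / heightScale);
            box.Margin = new Thickness((uint)(faceBox.X / widthScale), (uint)(faceBox.Y / heightScale), 0, 0);
        }
    }
}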
Track faces in a sequence of frames
If you want to detect faces in video, it is more efficient to use the FaceTracker class rather than the FaceDetector class, although the implementation steps are very similar. The FaceTracker uses information about previously processed frames to optimize the detection process. This example uses APIs from the following namespaces, in addition to those listed above.
using Windows.Media;
using System.Threading;
using Windows.System.Threading;
Declare a class variable for the FaceTracker object. This example uses a ThreadPoolTimer to initiate face tracking on a defined interval. A SemaphoreSlim is used to make sure that only one face tracking operation is running at a time.
private FaceTracker faceTracker;
private ThreadPoolTimer frameProcessingTimer;
private SemaphoreSlim frameProcessingSemaphore = new SemaphoreSlim(1);
To initialize the face tracking operation, create a new FaceTracker object by calling CreateAsync. Initialize the desired timer interval and then create the timer. The ProcessCurrentVideoFrame helper method will be called every time the specified interval elapses.
this.faceTracker = await FaceTracker.CreateAsync();
TimeSpan timerInterval = TimeSpan.FromMilliseconds(66); // 15 fps
this.frameProcessingTimer = Windows.System.Threading.ThreadPoolTimer.CreatePeriodicTimer(new Windows.System.Threading.TimerElapsedHandler(ProcessCurrentVideoFrame), timerInterval);
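When face tracking is no longer needed, for example when the user navigates away from the page, cancel the timer so the helper method stops being invoked. A minimal sketch:
// Stop the periodic callback; a new timer must be created to resume tracking.
this.frameProcessingTimer?.Cancel();
this.frameProcessingTimer = null;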
The ProcessCurrentVideoFrame helper is called asynchronously by the timer, so the method first calls the semaphore's Wait method to see if a tracking operation is ongoing; if one is, the method returns without trying to detect faces. At the end of the method, the semaphore's Release method is called, which allows a subsequent call to ProcessCurrentVideoFrame to continue.
The FaceTracker class operates on VideoFrame objects. There are multiple ways you can obtain a VideoFrame, including capturing a preview frame from a running MediaCapture object or implementing the ProcessFrame method of the IBasicVideoEffect. This example uses an undefined helper method that returns a video frame, GetLatestFrame, as a placeholder for this operation. For information about getting video frames from the preview stream of a running media capture device, see Get a preview frame.
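For illustration, here is one possible sketch of such a helper. It assumes a MediaCapture object named mediaCapture has already been initialized and its preview started, and it requires using Windows.Media.Capture;, using Windows.Media.MediaProperties;, and using System.Threading.Tasks;.
// Hypothetical implementation of GetLatestFrame: copy the current preview frame into a
// destination VideoFrame sized to match the preview stream, in the Nv12 format that
// FaceTracker expects.
private async Task<VideoFrame> GetLatestFrame()
{
    var previewProperties = mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;
    VideoFrame destinationFrame = new VideoFrame(BitmapPixelFormat.Nv12, (int)previewProperties.Width, (int)previewProperties.Height);
    return await mediaCapture.GetPreviewFrameAsync(destinationFrame);
}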
As with FaceDetector, the FaceTracker supports a limited set of pixel formats. This example abandons face detection if the supplied frame is not in the Nv12 format.
Call ProcessNextFrameAsync to retrieve a list of DetectedFace objects representing the faces in the frame. After you have the list of faces, you can display them in the same manner described above for face detection. Note that, because the face tracking helper method is not called on the UI thread, you must make any UI updates within a call to CoreDispatcher.RunAsync.
public async void ProcessCurrentVideoFrame(ThreadPoolTimer timer)
{
    if (!frameProcessingSemaphore.Wait(0))
    {
        return;
    }

    VideoFrame currentFrame = await GetLatestFrame();

    // Use FaceDetector.GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported to dynamically
    // determine supported formats
    const BitmapPixelFormat faceDetectionPixelFormat = BitmapPixelFormat.Nv12;

    if (currentFrame.SoftwareBitmap.BitmapPixelFormat != faceDetectionPixelFormat)
    {
        // Release the semaphore and dispose of the frame before abandoning this iteration,
        // or no subsequent call will be able to acquire the semaphore.
        frameProcessingSemaphore.Release();
        currentFrame.Dispose();
        return;
    }

    try
    {
        IList<DetectedFace> detectedFaces = await faceTracker.ProcessNextFrameAsync(currentFrame);

        var previewFrameSize = new Windows.Foundation.Size(currentFrame.SoftwareBitmap.PixelWidth, currentFrame.SoftwareBitmap.PixelHeight);
        var ignored = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            this.SetupVisualization(previewFrameSize, detectedFaces);
        });
    }
    catch (Exception)
    {
        // Face tracking failed
    }
    finally
    {
        frameProcessingSemaphore.Release();
        currentFrame.Dispose();
    }
}
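The SetupVisualization helper called above isn't defined in this example. A possible sketch, mirroring the ShowDetectedFaces helper shown earlier, follows; it clears the Canvas and draws a scaled Rectangle for each tracked face, reusing the lineBrush, fillBrush, and lineThickness members defined above.
// Hypothetical SetupVisualization helper: scale each FaceBox from preview-frame
// coordinates to the Canvas and draw a Rectangle for it.
private void SetupVisualization(Windows.Foundation.Size framePixelSize, IList<DetectedFace> foundFaces)
{
    this.VisualizationCanvas.Children.Clear();

    if (foundFaces == null || framePixelSize.Width == 0 || framePixelSize.Height == 0)
    {
        return;
    }

    double widthScale = framePixelSize.Width / this.VisualizationCanvas.ActualWidth;
    double heightScale = framePixelSize.Height / this.VisualizationCanvas.ActualHeight;

    foreach (DetectedFace face in foundFaces)
    {
        Rectangle box = new Rectangle
        {
            Tag = face.FaceBox,
            Width = (uint)(face.FaceBox.Width / widthScale),
            Height = (uint)(face.FaceBox.Height / heightScale),
            Fill = this.fillBrush,
            Stroke = this.lineBrush,
            StrokeThickness = this.lineThickness,
            Margin = new Thickness((uint)(face.FaceBox.X / widthScale), (uint)(face.FaceBox.Y / heightScale), 0, 0)
        };

        this.VisualizationCanvas.Children.Add(box);
    }
}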