How to perform face redaction on specific faces

Phil 166 Reputation points
2022-07-05T11:57:38.877+00:00

Hi there
I am looking at the new face redaction feature in Azure Media Services, documented here:
https://learn.microsoft.com/en-us/azure/media-services/latest/analyze-face-redaction-concept#elements-of-the-output-json-file

I have downloaded the sample code from GitHub and have face redaction working in the automatic mode (which blurs all faces).
I then ran just the Analyze mode:

    Preset faceAnalyzePreset = new FaceDetectorPreset(
        resolution: AnalysisResolution.SourceResolution,
        mode: FaceRedactorMode.Analyze
    );
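For context, a preset like this is only one piece: it is typically wrapped in a transform and submitted as a job against the input asset. A minimal sketch using the Media Services v3 .NET SDK, assuming an authenticated `IAzureMediaServicesClient` (`client`) and an already-uploaded input asset; the transform, job, and asset names here are placeholders, not from the original sample:

```csharp
// Sketch: wrap the Analyze preset in a transform, then submit a job.
// Assumes: client (IAzureMediaServicesClient), config, inputAssetName,
// outputAssetName, and faceAnalyzePreset are already set up.
TransformOutput[] outputs = new TransformOutput[]
{
    new TransformOutput(faceAnalyzePreset),
};

Transform transform = await client.Transforms.CreateOrUpdateAsync(
    config.ResourceGroup, config.AccountName, "FaceAnalyzeTransform", outputs);

Job job = await client.Jobs.CreateAsync(
    config.ResourceGroup,
    config.AccountName,
    "FaceAnalyzeTransform",
    "analyze-job-1", // placeholder job name
    new Job
    {
        Input = new JobInputAsset(assetName: inputAssetName),
        Outputs = new List<JobOutput> { new JobOutputAsset(outputAssetName) },
    });
```

The job's output asset is where the thumbnails and annotations JSON land after the Analyze pass completes.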

This returned a list of thumbnails and some JSON containing face IDs.

I now want to provide a list of IDs for the faces to be redacted (the documentation states this should be possible).

The second pass of the workflow takes a larger number of inputs that must be combined into a single asset.

This includes a list of IDs to blur, the original video, and the annotations JSON. This mode uses the annotations to apply blurring on the input video.

The output from the Analyze pass does not include the original video. The video needs to be uploaded into the input asset for the Redact mode task and selected as the primary file.
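Since a Media Services asset is backed by a blob storage container, one way to get several files into a single input asset is to create one asset and upload each file as a blob into its container. A sketch, assuming an authenticated `IAzureMediaServicesClient` (`client`) and the `Azure.Storage.Blobs` package; the file-name variables are placeholders:

```csharp
// Sketch: create ONE asset and upload all three files into its container,
// rather than creating a separate asset per file.
Asset asset = await client.Assets.CreateOrUpdateAsync(
    config.ResourceGroup, config.AccountName, inputAssetName, new Asset());

// Get a writable SAS URL for the asset's backing storage container.
AssetContainerSas sas = await client.Assets.ListContainerSasAsync(
    config.ResourceGroup, config.AccountName, inputAssetName,
    permissions: AssetContainerPermission.ReadWrite,
    expiryTime: DateTime.UtcNow.AddHours(1));

var container = new BlobContainerClient(new Uri(sas.AssetContainerSasUrls.First()));

// Upload the video, the annotations JSON, and the IDs text file
// into the same container, i.e. the same asset.
foreach (var file in new[] { InputMP4FileName, InputJsonFileName, InputTxtFileName })
{
    BlobClient blob = container.GetBlobClient(Path.GetFileName(file));
    await blob.UploadAsync(file);
}
```

This is the same pattern the sample's `CreateInputAssetAsync` helper uses internally for a single file; repeating the upload step against one asset's container is the assumption being made here.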

I'm not sure how to combine multiple files into a single asset. I have the MP4, the JSON that was produced, and a .txt file I created with some IDs, as the table in the documentation shows.

I can then create a preset that uses the "Redact" mode.

    Preset faceRedactionPreset = new FaceDetectorPreset(
        resolution: AnalysisResolution.SourceResolution,
        mode: FaceRedactorMode.Redact, // Second pass of the two-pass workflow: applies blurring using the annotations from the Analyze pass. (Combined is the separate single-pass mode where detection and blurring happen together.)
        blurType: BlurType.Med // Sets the amount of blur. For debugging, set this to Box to see just the outlines of the detected faces.
    );
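If all three files live in a single input asset, the "primary file" selection on the job side might be expressed through the `files` parameter of `JobInputAsset`, which restricts the job input to the listed files. A sketch under that assumption (not confirmed by the face redaction docs; names are placeholders):

```csharp
// Sketch: submit the Redact job against a single asset that contains the
// video, the annotations JSON, and the IDs file. Passing 'files' is one
// possible way to point the job at the MP4 as the primary input file.
Job redactJob = await client.Jobs.CreateAsync(
    config.ResourceGroup,
    config.AccountName,
    "FaceRedactTransform", // placeholder transform built from faceRedactionPreset
    "redact-job-1",        // placeholder job name
    new Job
    {
        Input = new JobInputAsset(
            assetName: inputAssetName,
            files: new List<string> { InputMP4FileName }),
        Outputs = new List<JobOutput> { new JobOutputAsset(outputAssetName) },
    });
```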

When creating an input asset, the sample helper only seems to accept a single file (the MP4 video). I am unsure how to give it the video, the JSON, and the text file.
I tried uploading all three files as separate assets, in case Azure would combine them on its side:

await CreateInputAssetAsync(client, config.ResourceGroup, config.AccountName, inputAssetName, InputMP4FileName);
await CreateInputAssetAsync(client, config.ResourceGroup, config.AccountName, inputJsonName, InputJsonFileName);
await CreateInputAssetAsync(client, config.ResourceGroup, config.AccountName, inputTxtName, InputTxtFileName);

Doing this, however, generates an error.

I can't find much documentation on how to perform the two-pass variation. The documentation explicitly states a maximum number of faces and says you can provide an optional file to choose which faces to redact, but I can't work out what format the files need to be in or how they should be sent to Azure.

Any help would be greatly appreciated.


Accepted answer
  1. John Deutscher (MSFT) 2,126 Reputation points
    2022-07-07T17:21:54.353+00:00

    Hi @Phil

    I hate to discourage customers from using features, but I would recommend that you not take a dependency on this feature for your use-case scenario right now. The reason is that we are re-evaluating this face redaction feature in light of current changes around Responsible AI.
    We may be deprecating this feature in the very near future, and I would hate for you to get stuck on it.

    See our recent announcement here: https://azure.microsoft.com/en-us/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/

    The specific section that is important for this feature is the following:
    "new customers need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. Read about example use cases, and use cases to avoid, here."

    For Azure Media Services, we are not planning to introduce new "gating" for responsible AI. We will be migrating some of our Face detection AI capabilities over to the Video Indexer service where we can consolidate all responsible AI development going forward with proper gating.

    Hope that makes sense to you and feel free to ask questions.

    As a short-term solution, you may want to look into using an open-source AI framework for the face detection, like NVIDIA's or Intel's AI offerings, which can be integrated with GStreamer to manipulate video. I'm not sure if they have a specific redaction sample, but that would be a good place to build from.

