List of pipeline topologies

Note

Azure Video Analyzer has been retired and is no longer available.

Azure Video Analyzer for Media is not affected by this retirement. It has been rebranded as Azure Video Indexer.

The following tables list validated sample Azure Video Analyzer live pipeline topologies, which you can further customize to fit your solution's needs. For each topology, the tables provide

  • A short description,
  • The corresponding sample tutorial(s), and
  • The name of the corresponding pipeline topology in the Visual Studio Code (VSCode) Video Analyzer extension.

Clicking a topology name opens the corresponding JSON file in this GitHub folder; clicking a sample opens the corresponding sample document; and clicking a VSCode name opens a screenshot of the sample topology.
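
Each topology is defined in a JSON document with the same overall shape: an API version, a set of parameters, one or more source nodes, optional processor nodes, and sink nodes wired together through their inputs arrays. The skeleton below illustrates that shape; the node names, parameter values, and video name are placeholders rather than values from any specific sample.

```json
{
  "@apiVersion": "1.1",
  "name": "SampleTopology",
  "properties": {
    "description": "Illustrative skeleton of a live pipeline topology",
    "parameters": [
      { "name": "rtspUrl", "type": "string", "description": "RTSP URL of the live video feed" }
    ],
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "sinks": [
      {
        "@type": "#Microsoft.VideoAnalyzer.VideoSink",
        "name": "videoSink",
        "inputs": [ { "nodeName": "rtspSource" } ],
        "videoName": "sample-video",
        "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
        "localMediaCacheMaximumSizeMiB": "1024"
      }
    ]
  }
}
```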

Live pipeline topologies

Continuous video recording

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| cvr-video-sink | Perform continuous video recording (CVR). Capture video and continuously record it to an Azure Video Analyzer video. | Continuous video recording and playback | Record to Video Analyzer video |
| cvr-with-grpcExtension | Perform CVR. A subset of the video frames is sent to an external AI inference engine using the sharedMemory mode for data transfer via the gRPC extension. The results are then published to the IoT Edge Hub. | | Record using gRPC Extension |
| cvr-with-httpExtension | Perform CVR. A subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub. | | Record using HTTP Extension |
| cvr-with-httpExtension-and-objectTracking | Perform CVR and track objects in a live feed. Inference metadata from an external AI inference engine is published to the IoT Edge Hub and can be played back with the video. | | Record and stream inference metadata with video |
| cvr-with-motion | Perform CVR. When motion is detected in a live video feed, the relevant inference events are published to the IoT Edge Hub. | | Record on motion detection |
| audio-video | Perform CVR and record audio using the outputSelectors property (see the sketch after this table). | | Record audio with video |
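
As one concrete customization from the table above, the audio-video topology relies on the outputSelectors property on a node input to control which media tracks reach the sink. The fragment below is a minimal sketch of that mechanism, assuming the mediaType selector syntax from the Video Analyzer node-input schema; the video name and cache values are placeholders.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.VideoSink",
  "name": "videoSink",
  "inputs": [
    {
      "nodeName": "rtspSource",
      "outputSelectors": [
        { "property": "mediaType", "operator": "is", "value": "audio" },
        { "property": "mediaType", "operator": "is", "value": "video" }
      ]
    }
  ],
  "videoName": "sample-audio-video",
  "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
  "localMediaCacheMaximumSizeMiB": "1024"
}
```

The other CVR topologies omit the selectors and record video only.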

Event-based video recording

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| evr-grpcExtension-video-sink | When an event of interest is detected by the external AI inference engine via the gRPC extension, those events are published to the IoT Edge Hub. The events also trigger the signal gate processor node (see the sketch after this table), so that new clips corresponding to when the event was detected are appended to the Azure Video Analyzer video. | Develop and deploy gRPC inference server | Record using gRPC Extension |
| evr-httpExtension-video-sink | When an event of interest is detected by the external AI inference engine via the HTTP extension, those events are published to the IoT Edge Hub. The events also trigger the signal gate processor node, so that new clips corresponding to when the event was detected are appended to the Azure Video Analyzer video. | | Record using HTTP Extension |
| evr-hubMessage-video-sink | Use an object detection AI model to look for objects in the video, and record video clips only when a certain type of object is detected. The trigger for generating these clips is the AI inference events published to the IoT Hub. | Event-based video recording and playback | Record to Video Analyzer video based on inference events |
| evr-hubMessage-file-sink | Record video clips to the local file system of the edge device whenever an external sensor (for example, a door sensor) sends a message to the pipeline topology. | | Record to local files based on inference events |
| evr-motion-video-sink-file-sink | Perform event-based recording of video clips to the cloud and to the edge. When motion is detected in a live video feed, events are sent to a signal gate processor node that opens, allowing video to pass through to a file sink node and a video sink node. As a result, new files are created on the local file system of the edge device, and new video clips are appended to your Video Analyzer video. The recordings contain the frames where motion was detected. | | Record motion events to Video Analyzer video and local files |
| evr-motion-video-sink | When motion is detected, those events are published to the IoT Edge Hub. The motion events also trigger the signal gate processor node, which sends frames to the video sink node while motion is present. As a result, new video clips corresponding to when motion was detected are appended to the Azure Video Analyzer video. | Detect motion, record video to Video Analyzer | Record motion events to Video Analyzer video |
| evr-motion-file-sink | When motion is detected in a live video feed, events are sent to a signal gate processor node that opens, sending frames to a file sink node. As a result, new files containing the frames where motion was detected are created on the local file system of the edge device. | Detect motion and record video on edge devices | Record motion events to local files |
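
All of the event-based topologies above share the same gating pattern: a signal gate processor node buffers live video and opens for a bounded window when a triggering event arrives. Below is a minimal sketch of such a node, assuming motion events as the trigger input; the ISO 8601 durations are illustrative and should be tuned per scenario.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
  "name": "signalGateProcessor",
  "inputs": [
    { "nodeName": "motionDetection" },
    { "nodeName": "rtspSource" }
  ],
  "activationEvaluationWindow": "PT1S",
  "activationSignalOffset": "-PT5S",
  "minimumActivationTime": "PT30S",
  "maximumActivationTime": "PT30S"
}
```

The negative activationSignalOffset starts the recorded clip a few seconds before the triggering event, so the recording includes the lead-up to the detection.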

Motion detection

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| motion-detection | Detect motion in a live video feed. When motion is detected, those events are published to the IoT Hub. | Get started with Azure Video Analyzer, Get started with Video Analyzer in the portal, Detect motion and emit events | Publish motion events to IoT Hub |
| motion-with-grpcExtension | Perform event-based recording when motion is present (see the motion detection processor sketch after this table). When motion is detected in a live video feed, those events are published to the IoT Edge Hub. The motion events also trigger a signal gate processor node that sends frames to a video sink node only while motion is present, so new video clips corresponding to when motion was detected are appended to the Azure Video Analyzer video. In addition, video analytics runs only when motion is detected: upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the gRPC extension, and the results are published to the IoT Edge Hub. | Analyze live video with your own model - gRPC | Analyze motion events using gRPC Extension |
| motion-with-httpExtension | Perform event-based recording when motion is present. When motion is detected in a live video feed, those events are published to the IoT Edge Hub. The motion events also trigger a signal gate processor node that sends frames to a video sink node only while motion is present, so new video clips corresponding to when motion was detected are appended to the Azure Video Analyzer video. In addition, video analytics runs only when motion is detected: upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the HTTP extension, and the results are published to the IoT Edge Hub. | Analyze live video with your own model - HTTP | Analyze motion events using HTTP Extension |
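
These topologies start from the same building block: a motion detection processor node placed between the RTSP source and the rest of the pipeline. A minimal sketch follows; sensitivity accepts low, medium, or high, and the eventAggregationWindow property shown here is an assumption carried over from the sample topologies rather than a required setting.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
  "name": "motionDetection",
  "sensitivity": "medium",
  "eventAggregationWindow": "PT1S",
  "inputs": [ { "nodeName": "rtspSource" } ]
}
```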

Extensions

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| grpcExtensionOpenVINO | Run video analytics on a live video feed. The gRPC extension captures frames from the camera at video frame rate, converts them to images, and sends them to the OpenVINO™ DL Streamer – Edge AI Extension module provided by Intel. The results are then published to the IoT Edge Hub. | Analyze live video with Intel OpenVINO™ DL Streamer – Edge AI Extension | |
| httpExtension | Run video analytics on a live video feed. A subset of the video frames from the camera is converted to images and sent to an external AI inference engine. The results are then published to the IoT Edge Hub. | Analyze live video with your own model - HTTP, Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision | Analyze video using HTTP Extension |
| httpExtensionOpenVINO | Run video analytics on a live video feed. A subset of the video frames from the camera is converted to images and sent to the OpenVINO™ Model Server – AI Extension module provided by Intel. The results are then published to the IoT Edge Hub. | Analyze live video using OpenVINO™ Model Server – AI Extension from Intel | Analyze video with Intel OpenVINO Model Server |
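
The extension topologies differ mainly in the extension processor node they use. The sketch below shows a gRPC extension node configured for the sharedMemory data-transfer mode mentioned earlier; the endpoint URL is a topology parameter, and the image dimensions and shared-memory size are illustrative values, not requirements.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
  "name": "grpcExtension",
  "endpoint": {
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    "url": "${grpcExtensionAddress}"
  },
  "dataTransfer": {
    "mode": "sharedMemory",
    "SharedMemorySizeMiB": "75"
  },
  "image": {
    "scale": { "mode": "pad", "width": "416", "height": "416" },
    "format": {
      "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
      "pixelFormat": "rgb24"
    }
  },
  "inputs": [ { "nodeName": "rtspSource" } ]
}
```

An HTTP extension node is configured analogously, with an HTTP endpoint in place of the gRPC one and no shared-memory transfer.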

Computer vision

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| spatial-analysis/person-count-operation-topology | Live video is sent to an external spatialAnalysis module that counts people in a designated zone. When the criteria defined by the AI operation are met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Person count operation with Computer Vision for Spatial Analysis |
| spatial-analysis/person-line-crossing-operation-topology | Live video is sent to an external spatialAnalysis module that tracks when a person crosses a designated line. When the criteria defined by the AI operation are met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Person crossing line operation with Computer Vision for Spatial Analysis |
| spatial-analysis/person-zone-crossing-operation-topology | Live video is sent to an external spatialAnalysis module that emits an event when a person enters or exits a zone. When the criteria defined by the AI operation are met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | Live Video with Computer Vision for Spatial Analysis | Person crossing zone operation with Computer Vision for Spatial Analysis |
| spatial-analysis/person-distance-operation-topology | Live video is sent to an external spatialAnalysis module that tracks when people violate a distance rule. When the criteria defined by the AI operation are met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Person distance operation with Computer Vision for Spatial Analysis |
| spatial-analysis/custom-operation-topology | Live video is sent to an external spatialAnalysis module that carries out a supported AI operation of your choosing. When the criteria defined by the AI operation are met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | Custom operation with Computer Vision for Spatial Analysis |
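
In the spatial analysis topologies, the operation-specific settings (zones, lines, thresholds) are not modeled as individual topology properties; they are passed through to the spatialAnalysis module as a serialized configuration string. The fragment below sketches that pattern, assuming the extensionConfiguration property of the gRPC extension node; the parameter names and pixel format here are placeholders.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
  "name": "spatialAnalysisExtension",
  "endpoint": {
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    "url": "${spatialAnalysisGrpcUrl}"
  },
  "extensionConfiguration": "${extensionConfiguration}",
  "image": {
    "format": {
      "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
      "pixelFormat": "bgr24"
    }
  },
  "inputs": [ { "nodeName": "rtspSource" } ]
}
```

At deployment time, ${extensionConfiguration} is set to the JSON describing the desired operation (for example, a person-count zone); the custom-operation topology leaves that choice entirely to you.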

AI composition

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| ai-composition | Run two AI inference models of your choice. In this example, classified video frames are sent from an AI inference engine running the Tiny YOLOv3 model to another engine running the YOLOv3 model. This topology lets you trigger a heavyweight AI module only when a lightweight AI module indicates the need to do so (see the sketch after this table). | Analyze live video streams with multiple AI models using AI composition | Record to the Video Analyzer service using multiple AI models |
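
The composition itself is expressed through node wiring: the second extension node lists the first as its input, so frames reach the heavy model only after passing through the light one. The fragment below sketches that wiring under the assumption of two gRPC extension nodes; the actual ai-composition sample may interpose additional gating between the two models.

```json
{
  "processors": [
    {
      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
      "name": "tinyYoloExtension",
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "${tinyYoloUrl}"
      },
      "image": {
        "scale": { "mode": "pad", "width": "416", "height": "416" },
        "format": {
          "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
          "pixelFormat": "rgb24"
        }
      },
      "inputs": [ { "nodeName": "rtspSource" } ]
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
      "name": "yoloV3Extension",
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "${yoloV3Url}"
      },
      "image": {
        "scale": { "mode": "pad", "width": "416", "height": "416" },
        "format": {
          "@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
          "pixelFormat": "rgb24"
        }
      },
      "inputs": [ { "nodeName": "tinyYoloExtension" } ]
    }
  ]
}
```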

Miscellaneous

| Name | Description | Samples | VSCode Name |
| --- | --- | --- | --- |
| object-tracking | Track objects in a live video feed. The object tracker comes in handy when you need to detect objects in every frame but the edge device lacks the compute power to apply the vision model to every frame. | Track objects in a live video | Record video based on the object tracking AI model |
| line-crossing | Use a computer vision model to detect objects in a subset of frames from a live video feed. The object tracker node tracks those objects across the remaining frames and passes them through a line-crossing node, which emits events when a tracked object crosses a designated virtual line (see the sketch after this table). | Detect when objects cross a virtual line in a live video | Record video based on the line crossing AI model |
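
Both topologies build on the object tracker processor node; the line-crossing topology then feeds the tracked objects into a line-crossing node. The fragment below sketches that chain. The accuracy value, node wiring, and especially the line-definition schema (shown here as normalized start/end coordinates) are assumptions to be checked against the actual sample topology.

```json
{
  "processors": [
    {
      "@type": "#Microsoft.VideoAnalyzer.ObjectTrackingProcessor",
      "name": "objectTracker",
      "accuracy": "medium",
      "inputs": [ { "nodeName": "httpExtension" } ]
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.LineCrossingProcessor",
      "name": "lineCrossing",
      "inputs": [ { "nodeName": "objectTracker" } ],
      "lines": [
        {
          "name": "line1",
          "line": {
            "start": { "x": 0.5, "y": 0.1 },
            "end": { "x": 0.5, "y": 0.9 }
          }
        }
      ]
    }
  ]
}
```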

Next steps

Understand Video Analyzer pipelines.