Azure Video Analyzer has been retired and is no longer available.
Azure Video Analyzer for Media is not affected by this retirement. It has been rebranded as Azure Video Indexer.
The following tables list validated sample Azure Video Analyzer live pipeline topologies. These topologies can be further customized to suit your solution's needs. For each topology, the tables provide a short description, the corresponding sample tutorial(s), and the pipeline topology name used by the Visual Studio Code (VS Code) Video Analyzer extension.
Clicking a topology name opens the corresponding JSON file in this GitHub folder, clicking a sample opens the corresponding sample document, and clicking a VS Code name opens a screenshot of the sample topology.
Perform CVR. A subset of the video frames is sent to an external AI inference engine using the sharedMemory mode for data transfer via the gRPC extension. The results are then published to the IoT Edge Hub.
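As an illustrative sketch of this pattern (node names and property values below are assumptions, not copied from the shipped sample, and required properties such as the RTSP URL and the gRPC endpoint are omitted for brevity), the RTSP source feeds both a video sink for continuous recording and a gRPC extension node whose inference results flow to an IoT Hub message sink:

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    {
      "@type": "#Microsoft.VideoAnalyzer.GrpcExtension",
      "name": "grpcExtension",
      "inputs": [ { "nodeName": "rtspSource" } ],
      "dataTransfer": { "mode": "sharedMemory", "sharedMemorySizeMiB": "75" },
      "samplingOptions": { "maximumSamplesPerSecond": "2" }
    },
    { "@type": "#Microsoft.VideoAnalyzer.VideoSink", "name": "videoSink",
      "inputs": [ { "nodeName": "rtspSource" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink", "name": "hubSink",
      "inputs": [ { "nodeName": "grpcExtension" } ] }
  ]
}
```

The `sharedMemory` transfer mode passes frames to the inference server without serializing them over the local gRPC connection, which matters at higher frame rates.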
Perform CVR. A subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub.
Perform CVR and track objects in a live feed. Inference metadata from an external AI inference engine is published to the IoT Edge Hub, and can be played back with the video.
When an event of interest is detected by the external AI inference engine via the gRPC extension, those events are published to the IoT Edge Hub. The events are used to trigger the signal gate processor node that results in the appending of new clips to the Azure Video Analyzer video, corresponding to when the event of interest was detected.
When an event of interest is detected by the external AI inference engine via the HTTP extension, those events are published to the IoT Edge Hub. The events are used to trigger the signal gate processor node that results in the appending of new clips to the Azure Video Analyzer video, corresponding to when the event of interest was detected.
Use an object detection AI model to look for objects in the video, and record video clips only when a certain type of object is detected. The trigger to generate these clips is based on the AI inference events published onto the IoT Hub.
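A hedged sketch of the triggering path described above: an IoT Hub message source feeds the signal gate processor, so inference events routed back into the module open the gate for a fixed recording window. Node names, the hub input name, and the ISO 8601 durations below are illustrative assumptions, not values from the shipped sample:

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    { "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSource", "name": "iotMessageSource",
      "hubInputName": "recordingTrigger" },
    { "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor", "name": "signalGate",
      "inputs": [ { "nodeName": "iotMessageSource" }, { "nodeName": "rtspSource" } ],
      "activationEvaluationWindow": "PT1S",
      "activationSignalOffset": "PT0S",
      "minimumActivationTime": "PT30S",
      "maximumActivationTime": "PT30S" },
    { "@type": "#Microsoft.VideoAnalyzer.VideoSink", "name": "videoSink",
      "inputs": [ { "nodeName": "signalGate" } ] }
  ]
}
```

When a trigger message arrives, the gate stays open for `minimumActivationTime`, so each detection yields a clip of at least that duration appended to the Video Analyzer video resource.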
Record video clips to the local file system of the edge device whenever an external sensor sends a message to the pipeline topology. For example, the sensor can be a door sensor.
Perform event-based recording of video clips to the cloud and to the edge. When motion is detected from a live video feed, events are sent to a signal gate processor node that opens, allowing video to pass through to a file sink node and a video sink node. As a result, new files are created on the local file system of the edge device, and new video clips are appended to your Video Analyzer video. The recordings contain the frames where motion was detected.
When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger the signal gate processor node that will send frames to the video sink node when motion is detected. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected.
When motion is detected from a live video feed, events are sent to a signal gate processor node that opens, sending frames to a file sink node. As a result, new files are created on the local file system of the edge device, containing the frames where motion was detected.
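The motion-to-file-sink path might be sketched as follows; the node names, sensitivity value, directory path, and file name pattern are illustrative assumptions, and required source properties are omitted:

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    { "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor", "name": "motionDetection",
      "sensitivity": "medium",
      "inputs": [ { "nodeName": "rtspSource" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor", "name": "signalGate",
      "inputs": [ { "nodeName": "motionDetection" }, { "nodeName": "rtspSource" } ],
      "minimumActivationTime": "PT30S",
      "maximumActivationTime": "PT30S" },
    { "@type": "#Microsoft.VideoAnalyzer.FileSink", "name": "fileSink",
      "inputs": [ { "nodeName": "signalGate" } ],
      "baseDirectoryPath": "/var/media",
      "fileNamePattern": "motionClip-${System.DateTime}" }
  ]
}
```

Note that the gate takes two inputs: the motion events act as the activation signal, while the raw video from the source is what actually passes through when the gate opens.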
Perform event-based recording in the presence of motion. When motion is detected from a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the gRPC extension. The results are then published to the IoT Edge Hub.
Perform event-based recording in the presence of motion. When motion is detected in a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub.
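Structurally, the motion detector's output fans out to both the extension node (so inference runs only on frames where motion was found) and the signal gate (so recording is also motion-gated). A minimal sketch, with illustrative node names and required endpoint/source properties omitted:

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    { "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor", "name": "motionDetection",
      "sensitivity": "medium",
      "inputs": [ { "nodeName": "rtspSource" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.HttpExtension", "name": "httpExtension",
      "inputs": [ { "nodeName": "motionDetection" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor", "name": "signalGate",
      "inputs": [ { "nodeName": "motionDetection" }, { "nodeName": "rtspSource" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.VideoSink", "name": "videoSink",
      "inputs": [ { "nodeName": "signalGate" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink", "name": "hubSink",
      "inputs": [ { "nodeName": "httpExtension" } ] }
  ]
}
```

Gating inference on motion is what keeps the AI engine idle (and the edge device's compute free) while the scene is static.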
Run video analytics on a live video feed. Video frames from the camera are converted to images at the full frame rate and sent, via the gRPC extension, to the OpenVINO™ DL Streamer – Edge AI Extension module provided by Intel. The results are then published to the IoT Edge Hub.
Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to an external AI inference engine. The results are then published to the IoT Edge Hub.
Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to the OpenVINO™ Model Server – AI Extension module provided by Intel. The results are then published to the IoT Edge Hub.
Live video is sent to an external spatialAnalysis module that counts people in a designated zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource.
Live video is sent to an external spatialAnalysis module that tracks when a person crosses a designated line. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource.
Live video is sent to an external spatialAnalysis module that emits an event when a person enters or exits a zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource.
Live video is sent to an external spatialAnalysis module that tracks when people violate a distance rule. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource.
Live video is sent to an external spatialAnalysis module that carries out a supported AI operation. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource.
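In these spatialAnalysis topologies, the integration point is a Cognitive Services vision processor node that points at the spatialAnalysis module and names the AI operation to run. The sketch below is an assumption-laden fragment — the endpoint URL, zone definition, and operation/type names are illustrative, and the exact operation schema should be checked against the sample JSON:

```json
{
  "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
  "name": "spatialAnalysisProcessor",
  "inputs": [ { "nodeName": "rtspSource" } ],
  "endpoint": {
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    "url": "tcp://spatialanalysis:50051"
  },
  "operation": {
    "@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonCountOperation",
    "zones": [
      { "zone": { "@type": "#Microsoft.VideoAnalyzer.NamedPolygonString",
                  "name": "doorZone",
                  "polygon": "[[0.3,0.2],[0.7,0.2],[0.7,0.8],[0.3,0.8]]" },
        "events": [ { "trigger": "event" } ] }
    ]
  }
}
```

Swapping the `operation` object is what distinguishes the person-count, line-crossing, zone-crossing, and distance topologies listed above; the rest of the pipeline (signal gate, video sink) stays the same.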
Run two AI inference models of your choice. In this example, video frames classified by an AI inference engine running the Tiny YOLOv3 model are sent to another engine running the full YOLOv3 model. Such a topology lets you trigger a heavyweight AI module only when a lightweight AI module indicates the need to do so.
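The essential wiring is that the second extension node takes its input from the first, so frames flow light model → heavy model. The fragment below is only a sketch of that chaining (node names are assumptions, and the logic that forwards only "interesting" frames lives in the extension modules' configuration in the real sample, not in the topology itself):

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    { "@type": "#Microsoft.VideoAnalyzer.GrpcExtension", "name": "tinyYoloExtension",
      "inputs": [ { "nodeName": "rtspSource" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.GrpcExtension", "name": "yoloExtension",
      "inputs": [ { "nodeName": "tinyYoloExtension" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink", "name": "hubSink",
      "inputs": [ { "nodeName": "yoloExtension" } ] }
  ]
}
```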
Track objects in a live video feed. The object tracker comes in handy when you need to detect objects in every frame, but the edge device lacks the compute power to apply the vision model to every frame.
Use a computer vision model to detect objects in a subset of frames of a live video feed. The object tracker node then tracks those objects across the remaining frames and passes them to a line-crossing node, which emits events when tracked objects cross a designated virtual line.
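The detect–track–count chain might be sketched as below; node names, the `accuracy` value, and the line coordinates (normalized to the frame) are illustrative assumptions, and the detector's required endpoint properties are omitted:

```json
{
  "nodes": [
    { "@type": "#Microsoft.VideoAnalyzer.RtspSource", "name": "rtspSource" },
    { "@type": "#Microsoft.VideoAnalyzer.HttpExtension", "name": "httpExtension",
      "inputs": [ { "nodeName": "rtspSource" } ],
      "samplingOptions": { "maximumSamplesPerSecond": "2" } },
    { "@type": "#Microsoft.VideoAnalyzer.ObjectTrackingProcessor", "name": "objectTracker",
      "accuracy": "medium",
      "inputs": [ { "nodeName": "httpExtension" } ] },
    { "@type": "#Microsoft.VideoAnalyzer.LineCrossingProcessor", "name": "lineCrossing",
      "inputs": [ { "nodeName": "objectTracker" } ],
      "lines": [
        { "name": "doorLine",
          "line": { "start": { "x": "0.5", "y": "0.1" },
                    "end":   { "x": "0.5", "y": "0.9" } } }
      ] },
    { "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink", "name": "hubSink",
      "inputs": [ { "nodeName": "lineCrossing" } ] }
  ]
}
```

The extension node only samples a few frames per second; the tracker interpolates object positions in between, which is what makes line-crossing detection feasible on a compute-constrained edge device.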