Analyzing live videos without recording

Note

Azure Video Analyzer has been retired and is no longer available.

Azure Video Analyzer for Media is not affected by this retirement. It has been rebranded as Azure Video Indexer.

Overview

You can use a pipeline topology to analyze live video without recording any portion of the video to a file or an asset. The pipeline topologies shown below are similar to the ones in the article on Event-based video recording, but without a video sink node or a file sink node.

Note

Analyzing live videos is currently available only in the edge module, not in the cloud.

Motion detection

The pipeline topology shown below consists of an RTSP source node, a motion detection processor node, and an IoT Hub message sink node; you can see the settings used in its JSON representation. This topology enables you to detect motion in the incoming live video stream and relay the motion events to other apps and services via the IoT Hub message sink node. Those apps or services can then trigger an alert or send a notification to the appropriate personnel. A sketch of the topology's JSON follows the diagram below.

Diagram: Detecting motion in live video
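
As an illustration, here is a minimal sketch of what such a topology's JSON representation can look like. The parameter name (rtspUrl), node names, sensitivity value, and API version are assumptions made for this sketch; refer to the published topology samples for the exact schema.

```json
{
  "@apiVersion": "1.1",
  "name": "MotionDetection",
  "properties": {
    "description": "Detect motion in live video and emit events to the IoT hub",
    "parameters": [
      { "name": "rtspUrl", "type": "string", "description": "RTSP URL of the camera" }
    ],
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "processors": [
      {
        "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
        "name": "motionDetection",
        "sensitivity": "medium",
        "inputs": [ { "nodeName": "rtspSource" } ]
      }
    ],
    "sinks": [
      {
        "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
        "name": "hubSink",
        "hubOutputName": "inferenceOutput",
        "inputs": [ { "nodeName": "motionDetection" } ]
      }
    ]
  }
}
```

Note that no video sink or file sink node appears in the sinks array: motion events flow to the IoT hub, but no video is recorded.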

Analyzing video using a custom vision model

The pipeline topology shown below enables you to analyze a live video stream using a custom vision model packaged in a separate module. You can see the settings used in its JSON representation. Other examples are available that show how to wrap models into IoT Edge modules that run as an inference service.

Diagram: Analyzing live video using a custom vision module

In this pipeline topology, the video input from the RTSP source is sent to an HTTP extension processor node, which sends image frames (in JPEG, BMP, or PNG format) to an external inference service over REST. The results from the external inference service are retrieved by the HTTP extension node and relayed to the IoT Edge hub via the IoT Hub message sink node. This type of pipeline topology can be used to build solutions for a variety of scenarios, such as understanding the time-series distribution of vehicles at an intersection or the consumer traffic pattern in a retail store.
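
A minimal sketch of the HTTP extension processor node in such a topology is shown below. The endpoint parameter (inferencingUrl) and the image dimensions are placeholders for illustration, not values mandated by the service.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
  "name": "httpExtension",
  "endpoint": {
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    "url": "${inferencingUrl}"
  },
  "image": {
    "scale": {
      "mode": "preserveAspectRatio",
      "width": "416",
      "height": "416"
    },
    "format": { "@type": "#Microsoft.VideoAnalyzer.ImageFormatJpeg" }
  },
  "inputs": [ { "nodeName": "rtspSource" } ]
}
```

The image section controls how each frame is scaled and encoded before it is posted to the inference endpoint.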

Tip

You can control the rate at which frames are sent downstream from the HTTP extension processor node by using the samplingOptions field.
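
For example, to cap the frames forwarded to the inference service at five per second, you can set samplingOptions on the node as sketched below (the values shown are illustrative):

```json
"samplingOptions": {
  "skipSamplesWithoutAnnotation": "false",
  "maximumSamplesPerSecond": "5"
}
```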

An enhancement to this example is to use a motion detection processor ahead of the HTTP extension processor node. This reduces the load on the inference service, since the service is invoked only when there is motion activity in the video.

Diagram: Analyzing live video using a custom vision module on frames with motion
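
A minimal sketch of that chaining, reusing the node names assumed in the earlier sketches: the motion detection processor takes the RTSP source as its input, and the HTTP extension processor takes the motion detection processor as its input, so frames reach the inference service only when motion is detected.

```json
"processors": [
  {
    "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
    "name": "motionDetection",
    "sensitivity": "medium",
    "inputs": [ { "nodeName": "rtspSource" } ]
  },
  {
    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
    "name": "httpExtension",
    "endpoint": {
      "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
      "url": "${inferencingUrl}"
    },
    "image": {
      "format": { "@type": "#Microsoft.VideoAnalyzer.ImageFormatJpeg" }
    },
    "inputs": [ { "nodeName": "motionDetection" } ]
  }
]
```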

Next steps

Quickstart: Analyze a live video feed from a (simulated) IP camera using your own HTTP model