AI composition

Note

Azure Video Analyzer has been retired and is no longer available.

Azure Video Analyzer for Media is not affected by this retirement; it has been rebranded as Azure Video Indexer.

This article gives a high-level overview of Azure Video Analyzer support for three kinds of AI composition.

Sequential AI composition

AI nodes can be composed sequentially, allowing a downstream node to augment the inferences generated by an upstream node.

Key aspects

  • Pipeline extension nodes act as media pass-through nodes and can be configured so that each external AI server receives frames at a different rate, format, and resolution. They can also be configured so that an external AI server receives either all frames or only frames that already contain inferences.
  • Inferences are added to frames as they pass through the extension nodes; any number of such nodes can be added in sequence.
  • Other scenarios, such as continuous video recording or event-based video recording, can be combined with sequential AI composition.
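
A sequential composition is expressed in a pipeline topology by pointing one extension node's input at another. The fragment below is an illustrative sketch, not a complete topology: the node names (`personDetector`, `classifier`) and endpoint URLs are hypothetical placeholders, and the second node uses `skipSamplesWithoutAnnotation` so it only receives frames that already carry inferences.

```json
{
  "nodes": [
    {
      "@type": "#Microsoft.VideoAnalyzer.RtspSource",
      "name": "rtspSource",
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "${rtspUrl}"
      }
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
      "name": "personDetector",
      "inputs": [ { "nodeName": "rtspSource" } ],
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://personDetector:8080/score"
      }
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
      "name": "classifier",
      "inputs": [ { "nodeName": "personDetector" } ],
      "samplingOptions": {
        "skipSamplesWithoutAnnotation": "true"
      },
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://classifier:8080/score"
      }
    }
  ]
}
```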

Parallel AI composition

AI nodes can also be composed in parallel instead of in sequence. This allows independent inferences to be performed on the ingested video stream, saving ingest bandwidth on the edge.

Key aspects

  • Video can be split into an arbitrary number of parallel branches, and such a split can happen after any of the following nodes:

    • RTSP source
    • Motion Detector
    • Pipeline extension
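
A parallel split is expressed simply by having multiple nodes reference the same upstream node as their input. The sketch below assumes two hypothetical extension nodes (`vehicleDetector` and `personDetector`, with placeholder endpoint URLs) that both consume the same RTSP source, so the video is ingested once and analyzed independently on each branch.

```json
{
  "nodes": [
    {
      "@type": "#Microsoft.VideoAnalyzer.RtspSource",
      "name": "rtspSource",
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "${rtspUrl}"
      }
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
      "name": "vehicleDetector",
      "inputs": [ { "nodeName": "rtspSource" } ],
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://vehicleDetector:8080/score"
      }
    },
    {
      "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
      "name": "personDetector",
      "inputs": [ { "nodeName": "rtspSource" } ],
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://personDetector:8080/score"
      }
    }
  ]
}
```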

Combined AI composition

Sequential and parallel composition constructs can be combined to build complex, composable AI pipelines. This is possible because AVA pipelines allow extension nodes to be chained sequentially, composed in parallel, or both, alongside other supported nodes, without limit.
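
To sketch the combined wiring (node bodies elided to the `inputs` property for brevity; all node names are hypothetical), one branch below chains two extensions sequentially after a motion detector, while a second branch taps the source directly:

```json
{
  "nodes": [
    { "name": "rtspSource" },
    { "name": "motionDetector",  "inputs": [ { "nodeName": "rtspSource" } ] },
    { "name": "objectDetector",  "inputs": [ { "nodeName": "motionDetector" } ] },
    { "name": "objectTracker",   "inputs": [ { "nodeName": "objectDetector" } ] },
    { "name": "sceneClassifier", "inputs": [ { "nodeName": "rtspSource" } ] }
  ]
}
```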

Next steps

Analyze live video streams with multiple AI models using AI composition