Tutorial: Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision
Note
Azure Video Analyzer has been retired and is no longer available. Alternatively, check out topics under Create video applications in the service.
Azure Video Analyzer for Media is not affected by this retirement. It is now rebranded as Azure Video Indexer.
In this tutorial, you'll learn how to use Azure Custom Vision to build a containerized model that can detect a toy truck and use the AI extensibility capability of Azure Video Analyzer on Azure IoT Edge to deploy the model on the edge for detecting toy trucks from a live video stream.
We'll show you how to bring together the power of Custom Vision to build and train a computer vision model by uploading and labeling a few images. You don't need any knowledge of data science, machine learning, or AI. You'll also learn about the capabilities of Video Analyzer and how to easily deploy a custom model as a container on the edge and analyze a simulated live video feed.
This tutorial uses an Azure virtual machine (VM) as an IoT Edge device and is based on sample code written in C# or Python.
The tutorial shows you how to:
- Set up the relevant resources.
- Build a Custom Vision model in the cloud to detect toy trucks and deploy it on the edge.
- Create and deploy a pipeline with an HTTP extension to a Custom Vision model.
- Run the sample code.
- Examine and interpret the results.
If you don't have an Azure subscription, create a free account before you begin.
Suggested pre-reading
Read through the following articles before you begin:
- Video Analyzer on IoT Edge overview
- Azure Custom Vision overview
- Video Analyzer on IoT Edge terminology
- Pipeline concept
- Video Analyzer without video recording
- Tutorial: Developing an IoT Edge module
- How to edit deployment.*.template.json
Prerequisites
- Install Docker on your machine.
Prerequisites for this tutorial are:
- An Azure account that includes an active subscription. Create an account for free if you don't already have one.
Note
You will need an Azure subscription with both the Contributor and User Access Administrator roles. If you don't have the right permissions, reach out to your account administrator to grant you those permissions.
- Visual Studio Code, with the following extensions:
  - Azure IoT Tools
  - Python
- Python 3 (3.6.9 or later), Pip 3, and optionally venv.
Important
This Custom Vision module only supports Intel x86 and amd64 architectures. Check the architecture of your edge device before continuing.
Set up Azure resources
The deployment process will take about 20 minutes. Upon completion, you will have certain Azure resources deployed in the Azure subscription, including:
- Video Analyzer account - This cloud service is used to register the Video Analyzer edge module, and for playing back recorded video and video analytics.
- Storage account - For storing recorded video and video analytics.
- Managed Identity - This is the user assigned managed identity used to manage access to the above storage account.
- Virtual machine - This is a virtual machine that will serve as your simulated edge device.
- IoT Hub - This acts as a central message hub for bi-directional communication between your IoT application, IoT Edge modules and the devices it manages.
In addition to the resources mentioned above, the following items are also created in the 'deployment-output' file share in your storage account, for use in quickstarts and tutorials:
- appsettings.json - This file contains the device connection string and other properties needed to run the sample application in Visual Studio Code.
- env.txt - This file contains the environment variables that you will need to generate deployment manifests using Visual Studio Code.
- deployment.json - This is the deployment manifest used by the template to deploy edge modules to the simulated edge device.
Tip
If you run into issues creating all of the required Azure resources, please use the manual steps in this quickstart.
Review the sample video
This tutorial uses a toy car inference video file to simulate a live stream. You can examine the video via an application such as VLC media player. Select Ctrl+N, and then paste a link to the toy car inference video to start playback. As you watch the video, note that at the 36-second marker a toy truck appears in the video. The custom model has been trained to detect this specific toy truck.
In this tutorial, you'll use Video Analyzer on IoT Edge to detect such toy trucks and publish associated inference events to the IoT Edge hub.
Overview
This diagram shows how the signals flow in this tutorial. An edge module simulates an IP camera hosting a Real-Time Streaming Protocol (RTSP) server. An RTSP source node pulls the video feed from this server and sends video frames to the HTTP extension processor node.
The HTTP extension node plays the role of a proxy. It samples the incoming video frames at the rate you set in the samplingOptions field and converts the video frames to the specified image type. Then it relays the images to the toy truck detector model built by using Custom Vision. The HTTP extension processor node gathers the detection results and publishes events to the Azure IoT Hub message sink node, which sends those events to the IoT Edge hub.
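To make this exchange concrete, here is a minimal sketch of the kind of translation the HTTP extension performs: wrapping each prediction from the scoring endpoint into an inference entity of the shape shown later in this tutorial. It is an illustration only, not the actual avaedge module code, and the Custom Vision response field names (predictions, tagName, probability, boundingBox) are assumptions based on the exported container's scoring API.

```python
# Illustrative sketch only: mirrors the mapping the HTTP extension processor
# performs from a Custom Vision scoring response to inference entities.

def cv_predictions_to_inferences(cv_response):
    """Map Custom Vision predictions to AVA-style inference entities."""
    inferences = []
    for p in cv_response.get("predictions", []):
        box = p["boundingBox"]
        inferences.append({
            "type": "entity",
            "entity": {
                "tag": {"value": p["tagName"], "confidence": p["probability"]},
                "box": {"l": box["left"], "t": box["top"],
                        "w": box["width"], "h": box["height"]},
            },
        })
    return inferences

# A response of the kind the detector container might return for one frame:
sample_response = {
    "predictions": [
        {"tagName": "delivery truck", "probability": 0.205,
         "boundingBox": {"left": 0.68, "top": 0.0,
                         "width": 0.31, "height": 0.95}},
    ]
}

if __name__ == "__main__":
    print(cv_predictions_to_inferences(sample_response))
```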
Build and deploy a Custom Vision toy detection model
As the name Custom Vision suggests, you can use it to build your own custom object detector or classifier in the cloud. It provides a simple, easy-to-use, and intuitive interface to build Custom Vision models that can be deployed in the cloud or on the edge via containers.
To build a toy truck detector, follow the steps in Quickstart: Build an object detector with the Custom Vision website.
Important
This Custom Vision module only supports Intel x86 and amd64 architectures. Check the architecture of your edge device before continuing.
Additional notes:
- For this tutorial, don't use the sample images provided in the quickstart article's Prerequisites section. Instead, we've used a certain image set to build the toy detector model; use these images when you're asked to choose your training images in the quickstart.
- In the image-tagging section of the quickstart, ensure that you're tagging the toy truck seen in the picture with the tag "delivery truck."
- Ensure that you select General (compact) as the option for Domains when you create the Custom Vision project.
After you're finished, you can export the model to a Docker container by using the Export button on the Performance tab. Ensure you choose Linux as the container platform type. This is the platform on which the container will run. The machine you download the container on could be either Windows or Linux. The instructions that follow were based on the container file downloaded onto a Windows machine.
You should have a zip file named <projectname>.DockerFile.Linux.zip downloaded onto your local machine.
Check if you have Docker installed. If not, install Docker for your Windows desktop.
Unzip the downloaded file in a location of your choice. Use the command line to go to the unzipped folder. You should see the following two files: app\labels.txt and app\model.pb.
Clone the Video Analyzer repository and use the command line to go to the edge-modules\extensions\customvision\avaextension folder
Copy the labels.txt and model.pb files from Step 3 into the edge-modules\extensions\customvision\avaextension folder. In the same folder, run the following commands:

docker build -t cvtruck .

This command downloads many packages, builds the Docker image, and tags it as cvtruck:latest.
Note
If successful, you should see the following messages: Successfully built <docker image id> and Successfully tagged cvtruck:latest. If the build command fails, try again. Sometimes dependency packages don't download the first time around.

docker image ls

This command checks that the new image is in your local registry.
Set up your development environment
Get the sample code
Clone the AVA C# samples repository (or the AVA Python samples repository if you prefer Python).
Start Visual Studio Code, and open the folder where the repo has been downloaded.
In Visual Studio Code, browse to the src/cloud-to-device-console-app folder and create a file named appsettings.json. This file contains the settings needed to run the program.
Browse to the file share in the storage account created in the setup step above, and locate the appsettings.json file under the "deployment-output" file share. Click on the file, and then hit the "Download" button. The contents should open in a new browser tab, which should look like:
{
  "IoThubConnectionString": "HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX",
  "deviceId": "avasample-iot-edge-device",
  "moduleId": "avaedge"
}
The IoT Hub connection string lets you use Visual Studio Code to send commands to the edge modules via Azure IoT Hub. Copy the above JSON into the src/cloud-to-device-console-app/appsettings.json file.
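As a quick sanity check, you can confirm the file has the fields the sample program reads. A minimal sketch in Python (the helper name is illustrative, not from the sample repo):

```python
import json

# Sketch: the fields the sample program expects to find in appsettings.json.
REQUIRED_KEYS = ("IoThubConnectionString", "deviceId", "moduleId")

def validate_app_settings(text):
    """Parse appsettings.json content and check for the expected fields."""
    settings = json.loads(text)
    missing = [k for k in REQUIRED_KEYS if k not in settings]
    if missing:
        raise KeyError(f"appsettings.json is missing: {missing}")
    return settings

sample_text = """{
  "IoThubConnectionString": "HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX",
  "deviceId": "avasample-iot-edge-device",
  "moduleId": "avaedge"
}"""

if __name__ == "__main__":
    print(validate_app_settings(sample_text)["deviceId"])
```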
Next, browse to the src/edge folder and create a file named .env. This file contains properties that Visual Studio Code uses to deploy modules to an edge device.
Browse to the file share in the storage account created in the setup step above, and locate the env.txt file under the "deployment-output" file share. Click on the file, and then hit the "Download" button. The contents should open in a new browser tab, which should look like:
SUBSCRIPTION_ID="<Subscription ID>"
RESOURCE_GROUP="<Resource Group>"
AVA_PROVISIONING_TOKEN="<Provisioning token>"
VIDEO_INPUT_FOLDER_ON_DEVICE="/home/localedgeuser/samples/input"
VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media"
APPDATA_FOLDER_ON_DEVICE="/var/lib/videoanalyzer"
CONTAINER_REGISTRY_USERNAME_myacr="<your container registry username>"
CONTAINER_REGISTRY_PASSWORD_myacr="<your container registry password>"
Copy the contents of your env.txt into the src/edge/.env file.
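Unlike appsettings.json, the .env file is a set of KEY="value" lines rather than JSON. The IoT Edge tooling in Visual Studio Code reads this format for you; the sketch below only illustrates what the file contains:

```python
# Sketch: parse the KEY="value" lines used by env.txt / .env.

def parse_env(text):
    """Parse KEY="value" lines into a dict, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample_env = 'SUBSCRIPTION_ID="<Subscription ID>"\nRESOURCE_GROUP="<Resource Group>"'

if __name__ == "__main__":
    print(parse_env(sample_env))
```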
Connect to the IoT Hub
In Visual Studio Code, set the IoT Hub connection string by selecting the More actions icon next to the AZURE IOT HUB pane in the lower-left corner. Copy the string from the src/cloud-to-device-console-app/appsettings.json file.
Note
You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for Built-in endpoints option in the left navigation pane. Click there and look for the Event Hub-compatible endpoint under Event Hub compatible endpoint section. Copy and use the text in the box. The endpoint will look something like this:
Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
In about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see the edge device avasample-iot-edge-device, which should have the following modules deployed:
- Edge Hub (module name edgeHub)
- Edge Agent (module name edgeAgent)
- Video Analyzer (module name avaedge)
- RTSP simulator (module name rtspsim)
Prepare to monitor the modules
When you run this quickstart or tutorial, events will be sent to the IoT Hub. To see these events, follow these steps:
Open the Explorer pane in Visual Studio Code, and look for Azure IoT Hub in the lower-left corner.
Expand the Devices node.
Right-click on avasample-iot-edge-device, and select Start Monitoring Built-in Event Endpoint.
Note
You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for Built-in endpoints option in the left navigation pane. Click there and look for the Event Hub-compatible endpoint under Event Hub compatible endpoint section. Copy and use the text in the box. The endpoint will look something like this:
Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
Examine the sample files
In Visual Studio Code, browse to src/edge. You'll see the .env file that you created along with a few deployment template files.
The deployment template refers to the deployment manifest for the edge device with some placeholder values. The .env file has the values for those variables.
Next, browse to the src/cloud-to-device-console-app folder. Here you'll see the appsettings.json file that you created along with a few other files:
- c2d-console-app.csproj: This is the project file for Visual Studio Code.
- operations.json: This file lists the different operations that you want the program to run.
- Program.cs: This sample program code:
- Loads the app settings.
- Invokes the Azure Video Analyzer module's direct methods to create topology, instantiate the pipeline and activate it.
- Pauses for you to examine the pipeline output in the TERMINAL window and the events sent to the IoT hub in the OUTPUT window.
- Deactivates the live pipeline, deletes the live pipeline, and deletes the topology.
If you're using the Python samples, the src/cloud-to-device-console-app folder contains these files instead:
- operations.json: This file lists the different operations that you want the program to run.
- main.py: This sample program code:
  - Loads the app settings.
  - Invokes the Azure Video Analyzer module's direct methods to create the topology, instantiate the pipeline, and activate it.
  - Pauses for you to examine the pipeline output in the TERMINAL window and the events sent to the IoT hub in the OUTPUT window.
  - Deactivates the live pipeline, deletes the live pipeline, and deletes the topology.
Generate and deploy the deployment manifest
In Visual Studio Code, go to src/cloud-to-device-console-app/operations.json.
Under pipelineTopologySet, ensure the following:

"pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/httpExtension/topology.json"

Under livePipelineSet, ensure:
- "topologyName" : "InferencingWithHttpExtension"
- Add the following to the top of the parameters array:
  {"name": "inferencingUrl","value": "http://cv/score"},
- Change the rtspUrl parameter value to "rtsp://rtspsim:554/media/t2.mkv".

Under pipelineTopologyDelete, ensure "name": "InferencingWithHttpExtension".

Right-click the src/edge/deployment.customvision.template.json file, and select Generate IoT Edge Deployment Manifest.
This action should create a manifest file in the src/edge/config folder named deployment.customvision.amd64.json.
Open the src/edge/deployment.customvision.template.json file, and find the registryCredentials JSON block. In this block, you'll find the address of your Azure container registry along with its username and password.

Push the local Custom Vision container into your Azure Container Registry instance by following these steps on the command line:

Sign in to the registry by executing the following command:

docker login <address>

Enter the username and password when asked for authentication.
Note
The password isn't visible on the command line.

Tag your image by using this command:

docker tag cvtruck <address>/cvtruck

Push your image by using this command:

docker push <address>/cvtruck

If successful, you should see Pushed on the command line along with the SHA for the image. You can also confirm by checking your Azure Container Registry instance in the Azure portal. Here you'll see the name of the repository along with the tag.
Set the IoT Hub connection string by selecting the More actions icon next to the AZURE IOT HUB pane in the lower-left corner. You can copy the string from the appsettings.json file. (Alternatively, ensure you have the proper IoT hub configured within Visual Studio Code via the Select IoT Hub command.)
Next, right-click src/edge/config/deployment.customvision.amd64.json, and select Create Deployment for Single Device.
You'll then be asked to select an IoT Hub device. Select avasample-iot-edge-device from the drop-down list.
In about 30 seconds, refresh the Azure IoT hub in the lower-left section. You should have the edge device with the following modules deployed:
- Edge Hub (module name edgeHub)
- Edge Agent (module name edgeAgent)
- Video Analyzer (module name avaedge)
- RTSP simulator (module name rtspsim, which simulates an RTSP server that acts as the source of a live video feed)
- Custom Vision (module named cv, which is based on the toy truck detection model)
From these steps, the Custom Vision module has now been added.
Run the sample program
If you open the topology for this tutorial in a browser, you'll see that the value of inferencingUrl has been set to http://cv/score. This setting means the inference server will return results after detecting toy trucks, if any, in the live video.
In Visual Studio Code, open the Extensions tab (or select Ctrl+Shift+X) and search for Azure IoT Hub.
Right-click and select Extension Settings.
Search and enable Show Verbose Message.
To start a debugging session, select the F5 key. You'll see messages printed in the TERMINAL window.

If you're using the Python sample instead:
- Navigate to the TERMINAL window in VS Code.
- Use the cd command to go to the video-analyzer-iot-edge-python-main/src/cloud-to-device-console-app directory.
- Run python main.py. You'll see messages printed in the TERMINAL window.
The operations.json code starts off with calls to the direct methods pipelineTopologyList and livePipelineList. If you cleaned up resources after you completed previous quickstarts, this process will return empty lists and then pause. To continue, select the Enter key.

The TERMINAL window shows the next set of direct method calls:
- A call to pipelineTopologySet that uses the preceding pipelineTopologyUrl.
- A call to livePipelineSet that uses the following body:

{
  "@apiVersion": "1.1",
  "name": "Sample-Pipeline-1",
  "properties": {
    "topologyName": "InferencingWithHttpExtension",
    "description": "Sample pipeline description",
    "parameters": [
      { "name": "inferencingUrl", "value": "http://cv/score" },
      { "name": "rtspUrl", "value": "rtsp://rtspsim:554/media/t2.mkv" },
      { "name": "rtspUserName", "value": "testuser" },
      { "name": "rtspPassword", "value": "testpassword" }
    ]
  }
}

- A call to livePipelineActivate that activates the pipeline and the flow of video.
- A second call to livePipelineList that shows that the pipeline is active.
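Each of these steps is a direct-method call to the avaedge module. The sketch below shows roughly what one such call looks like; only the payload builder runs here, the commented lines assume the azure-iot-hub Python package, and the helper and variable names are illustrative rather than taken from the sample repo:

```python
# Sketch: build a livePipelineSet body like the one shown above.
# Only the builder runs here; sending requires a real IoT Hub connection.

def build_live_pipeline_set(name, topology_name, parameters):
    """Build a livePipelineSet direct-method payload."""
    return {
        "@apiVersion": "1.1",
        "name": name,
        "properties": {
            "topologyName": topology_name,
            "parameters": [{"name": k, "value": v} for k, v in parameters],
        },
    }

payload = build_live_pipeline_set(
    "Sample-Pipeline-1",
    "InferencingWithHttpExtension",
    [("inferencingUrl", "http://cv/score"),
     ("rtspUrl", "rtsp://rtspsim:554/media/t2.mkv")],
)

# Sending it would look roughly like this (assumes azure-iot-hub is installed):
# from azure.iot.hub import IoTHubRegistryManager
# from azure.iot.hub.models import CloudToDeviceMethod
# manager = IoTHubRegistryManager(iothub_connection_string)
# method = CloudToDeviceMethod(method_name="livePipelineSet", payload=payload)
# manager.invoke_device_module_method("avasample-iot-edge-device", "avaedge", method)

if __name__ == "__main__":
    print(payload["properties"]["topologyName"])
```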
The output in the TERMINAL window pauses at a Press Enter to continue prompt. Don't select Enter yet. Scroll up to see the JSON response payloads for the direct methods you invoked.
Switch to the OUTPUT window in Visual Studio Code. You see messages that the Video Analyzer on IoT Edge module is sending to the IoT hub. The following section of this tutorial discusses these messages.
The pipeline continues to run and print results. The RTSP simulator keeps looping the source video. To stop the pipeline, return to the TERMINAL window and select Enter. The next series of calls cleans up resources:

- A call to livePipelineDeactivate deactivates the pipeline.
- A call to livePipelineDelete deletes the pipeline.
- A call to pipelineTopologyDelete deletes the topology.
- A final call to pipelineTopologyList shows that the list is empty.
Interpret the results
When you run the pipeline, the results from the HTTP extension processor node pass through the IoT Hub message sink node to the IoT hub. The messages you see in the OUTPUT window contain a body section and an applicationProperties
section. For more information, see Create and read IoT Hub messages.
In the following messages, the Video Analyzer module defines the application properties and the content of the body.
MediaSessionEstablished event
When a pipeline is instantiated, the RTSP source node attempts to connect to the RTSP server that runs on the rtspsim-live555 container. If the connection succeeds, the following event is printed.
[IoTHubMonitor] [9:42:18 AM] Message received from [avasample-iot-edge-device/avaedge]:
{
"body": {
"sdp": "SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 XXX.XX.XX.XX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/camera-300s.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/camera-300s.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n"
},
"applicationProperties": {
"dataVersion": "1.0",
"topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}",
"subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
"eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
"eventTime": "2021-04-09T09:42:18.1280000Z"
}
}
In this message, notice these details:
- The message is a diagnostics event. MediaSessionEstablished indicates that the RTSP source node (the subject) connected with the RTSP simulator and has begun to receive a simulated live feed.
- In applicationProperties, subject indicates that the message was generated from the RTSP source node in the pipeline.
- In applicationProperties, eventType indicates that this is a diagnostics event.
- eventTime indicates the time when the event occurred.
- The body contains data about the diagnostics event. In this case, the data comprises the Session Description Protocol (SDP) details.
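If you want to pick these fields out of the monitored output programmatically, a small sketch (field names match the message shown above; the helper name is illustrative):

```python
import json

# Sketch: pull the source node and event type out of a monitored event.

def summarize_event(message_json):
    """Return (node_name, event_name) for a Video Analyzer event message."""
    msg = json.loads(message_json)
    props = msg.get("applicationProperties") or msg.get("properties", {})
    node = props["subject"].rsplit("/", 1)[-1]    # e.g. the source node name
    event = props["eventType"].rsplit(".", 1)[-1] # e.g. the event name
    return node, event

sample_event = json.dumps({
    "body": {"sdp": "SDP:..."},
    "applicationProperties": {
        "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource",
        "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished",
    },
})

if __name__ == "__main__":
    print(summarize_event(sample_event))
```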
Inference event
The HTTP extension processor node receives inference results from the Custom Vision container and emits the results through the IoT Hub message sink node as inference events.
{
"body": {
"timestamp": 145892470449324,
"inferences": [
{
"type": "entity",
"entity": {
"tag": {
"value": "delivery truck",
"confidence": 0.20541823
},
"box": {
"l": 0.6826309,
"t": -0.01415127,
"w": 0.3135161,
"h": 0.94683206
}
}
},
{
"type": "entity",
"entity": {
"tag": {
"value": "delivery truck",
"confidence": 0.14967085
},
"box": {
"l": 0.33310884,
"t": 0.03174839,
"w": 0.13532706,
"h": 0.54967254
}
}
},
{
"type": "entity",
"entity": {
"tag": {
"value": "delivery truck",
"confidence": 0.1352181
},
"box": {
"l": 0.48884687,
"t": 0.44746214,
"w": 0.025887,
"h": 0.05414263
}
}
}
]
},
"properties": {
"topic": "/subscriptions/...",
"subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/processors/httpExtension",
"eventType": "Microsoft.VideoAnalyzer.Analytics.Inference",
"eventTime": "2021-05-14T21:24:09.436Z",
"dataVersion": "1.0"
},
"systemProperties": {
"iothub-connection-device-id": "avasample-iot-edge-device",
"iothub-connection-module-id": "avaedge",
"iothub-connection-auth-method": "{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
"iothub-connection-auth-generation-id": "637563926153483223",
"iothub-enqueuedtime": 1621027452077,
"iothub-message-source": "Telemetry",
"messageId": "96f7f0b5-728d-4e3e-a7bb-4e3198c58726",
"contentType": "application/json",
"contentEncoding": "utf-8"
}
Note the following information in the preceding messages:
- The subject in properties references the node in the pipeline from which the message was generated. In this case, the message originates from the HTTP extension processor node.
- The event type in properties indicates that this is an analytics inference event.
- The event time indicates the time when the event occurred.
- The body contains data about the analytics event. In this case, the event is an inference event, so the body contains an array of inferences.
- The inferences section contains a list of predictions in which a toy delivery truck (tag "delivery truck") is found in the frame. "delivery truck" is the custom tag that you provided when training the model on the toy truck. The model identifies the toy truck in the input video with different confidence scores.
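Downstream code often discards low-confidence detections. A sketch of filtering the inference body shown above by a confidence threshold (the 0.2 threshold and helper name are illustrative):

```python
# Sketch: keep only detections whose confidence meets a threshold.

def confident_detections(body, tag="delivery truck", threshold=0.2):
    """Return bounding boxes for inferences at or above the threshold."""
    hits = []
    for inf in body.get("inferences", []):
        entity = inf.get("entity", {})
        t = entity.get("tag", {})
        if t.get("value") == tag and t.get("confidence", 0.0) >= threshold:
            hits.append(entity["box"])
    return hits

# Two of the inferences from the sample message above:
sample_body = {"inferences": [
    {"type": "entity", "entity": {
        "tag": {"value": "delivery truck", "confidence": 0.20541823},
        "box": {"l": 0.6826309, "t": -0.01415127, "w": 0.3135161, "h": 0.94683206}}},
    {"type": "entity", "entity": {
        "tag": {"value": "delivery truck", "confidence": 0.1352181},
        "box": {"l": 0.48884687, "t": 0.44746214, "w": 0.025887, "h": 0.05414263}}},
]}

if __name__ == "__main__":
    print(len(confident_detections(sample_body)))
```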
Clean up resources
If you intend to try the other tutorials or quickstarts, hold on to the resources you created. Otherwise, go to the Azure portal, browse to your resource groups, select the resource group under which you ran this tutorial, and delete all the resources.
Next steps
Review additional challenges for advanced users:
- Use an IP camera that has support for RTSP instead of using the RTSP simulator. You can search for IP cameras that support RTSP on the ONVIF conformant products page. Look for devices that conform with profiles G, S, or T.
- Use an AMD64 or x64 Linux device instead of an Azure Linux VM. This device must be in the same network as the IP camera. You can follow the instructions in Install Azure IoT Edge runtime on Linux.
Then register the device with Azure IoT Hub by following instructions in Deploy your first IoT Edge module to a virtual Linux device.