ML-assisted data labeling completed the Training run phase, but how do I start the Inference run?

jen 11 Reputation points
2022-09-21T03:18:38.027+00:00

I'm having trouble getting ML assisted data labeling to begin prelabeling.

I have an object detection data labeling project in an Azure ML Studio workspace, with ML-assisted labeling enabled at project creation. I manually labeled the number of items required to start the Training run (in my case, 45), and the Training run completed without issue. Yet the Inference run never started. The info tooltip on the "Prelabeled" task queue says the Inference run must finish before prelabeling can begin.

How do I start the Inference run? Is a manual step needed, or should it happen automatically after the Training run completes (assuming compute is available)? I labeled more items manually to trigger and complete a second Training run, but Inference still did not start. I must be missing something.

The on-demand button in the Settings applies to the Training run only, so it is unclear whether the Inference run can be triggered on demand.

Here's my project experiments summary on the dashboard view:

ML assisted data labeling experiments

|            | Experiment             | Latest run       | Run status |
|------------|------------------------|------------------|------------|
| Training   | cool_experiment        | AutoML_<hashid>  | Completed  |
| Inference  | Experiment not started | --               | --         |


2 answers

  1. Ramr-msft 17,606 Reputation points
    2022-09-22T05:59:23.483+00:00

    @jen Thanks for the question. Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data.

    Here is the documentation for setting up AutoML.


  2. Shu 1 Reputation point Microsoft Employee
    2022-10-28T06:59:41.007+00:00

    The Inference run generates ML-assisted pre-labeled tasks for labelers. Two prerequisites must be met before it starts:

    1. There is a model: at least one Training run has completed successfully.
    2. There is a need: the system believes there are not enough tasks in the task queue (for example, fewer than 300 tasks queued). Only then does it start an Inference run to generate pre-labeled tasks.
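The two prerequisites above can be sketched as a simple predicate. This is a conceptual illustration of the behavior described in the answer, not Azure ML's actual implementation; the function name, the `ProjectState` class, and the use of exactly 300 as the threshold are assumptions for illustration.

```python
# Conceptual sketch of the Inference-run trigger logic described above.
# ProjectState, should_start_inference_run, and QUEUE_THRESHOLD are
# illustrative names, not part of the Azure ML service or SDK.
from dataclasses import dataclass

QUEUE_THRESHOLD = 300  # "fewer than 300 tasks queued" per the answer above


@dataclass
class ProjectState:
    has_completed_training_run: bool  # prerequisite 1: a model exists
    tasks_in_queue: int               # prerequisite 2: queue is running low


def should_start_inference_run(state: ProjectState) -> bool:
    """Return True only when both prerequisites are satisfied."""
    has_model = state.has_completed_training_run
    has_need = state.tasks_in_queue < QUEUE_THRESHOLD
    return has_model and has_need


# A completed Training run alone is not enough: a well-stocked task queue
# (as in the asker's project) keeps the Inference run from starting.
print(should_start_inference_run(ProjectState(True, 450)))   # False: no need yet
print(should_start_inference_run(ProjectState(True, 120)))   # True: model + need
print(should_start_inference_run(ProjectState(False, 120)))  # False: no model yet
```

If this model is accurate, it would explain the asker's symptom: both Training runs completed, but the manual labeling that fed them may also have kept the remaining queue above the threshold, so no "need" was ever detected.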