Train your orchestration workflow model

Training is the process where the model learns from your labeled utterances. After training is completed, you will be able to view model performance.

To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, the model isn't affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project until it completes.

Training times can range from a few seconds for simple projects to a couple of hours when you reach the maximum limit of utterances.

Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). The results are returned so you can review the model’s performance.
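
The service reports evaluation metrics such as precision, recall, and F1 for you. For intuition only, the following Python sketch shows how such a comparison between predicted intents and the labeled testing set could be computed; it is not the service's actual implementation, and the intent names are invented for the example.

# Conceptual illustration only: comparing predictions against the labeled
# testing set to compute per-intent precision, recall, and F1. This is not
# the service's implementation; intent names are made up.
from collections import Counter

def intent_metrics(gold, predicted):
    """gold and predicted are parallel lists of intent names for the testing-set utterances."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted p, but the label was g
            fn[g] += 1  # the labeled intent g was missed
    metrics = {}
    for intent in set(gold) | set(predicted):
        p_den, r_den = tp[intent] + fp[intent], tp[intent] + fn[intent]
        precision = tp[intent] / p_den if p_den else 0.0
        recall = tp[intent] / r_den if r_den else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[intent] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

print(intent_metrics(["BookFlight", "BookFlight", "GetWeather"],
                     ["BookFlight", "GetWeather", "GetWeather"]))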

Prerequisites

  • A successfully created project with a configured Azure blob storage account

See the project development lifecycle for more information.

Data splitting

Before you start the training process, the labeled utterances in your project are divided into a training set and a testing set, each of which serves a different function. The training set is used to train the model; this is the set from which the model learns the labeled utterances. The testing set is a blind set that isn't introduced to the model during training, but only during evaluation.

After the model is trained successfully, it's used to make predictions on the utterances in the testing set. These predictions are used to calculate the evaluation metrics.

It's recommended that all your intents be adequately represented in both the training and testing sets.
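
For intuition only, the following Python sketch shows one simple way an 80/20 split could keep at least one utterance of every intent in the testing set. It is not the splitting logic the service uses, and the utterances and intent names are invented for the example.

# Hedged illustration of a per-intent 80/20 split. This is not the service's
# splitting algorithm; utterances and intent names are made up.
import random
from collections import defaultdict

def split_utterances(utterances, test_ratio=0.2, seed=7):
    """utterances: list of (text, intent) pairs. Returns (training, testing)."""
    by_intent = defaultdict(list)
    for text, intent in utterances:
        by_intent[intent].append((text, intent))
    training, testing = [], []
    rng = random.Random(seed)
    for intent, items in by_intent.items():
        rng.shuffle(items)
        n_test = max(1, round(len(items) * test_ratio))  # at least one test utterance per intent
        testing.extend(items[:n_test])
        training.extend(items[n_test:])
    return training, testing

training_set, testing_set = split_utterances([
    ("Book me a flight to Cairo", "BookFlight"),
    ("I need a plane ticket to Seattle", "BookFlight"),
    ("Will it rain tomorrow?", "GetWeather"),
    ("What's the forecast for Paris?", "GetWeather"),
])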

Orchestration workflow supports two methods for data splitting:

  • Automatically splitting the testing set from training data: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.

Note

If you choose the Automatically splitting the testing set from training data option, only the data assigned to the training set will be split according to the percentages provided.

  • Use a manual split of training and testing data: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during labeling.

Note

You can only add utterances to the training set for non-connected intents.

Train model

Start training job

To start training your model from within Language Studio:

  1. Select Training jobs from the left side menu.

  2. Select Start a training job from the top menu.

  3. Select Train a new model and type in the model name in the text box. You can also overwrite an existing model by selecting this option and choosing the model you want to overwrite from the dropdown menu. Overwriting a trained model is irreversible, but it won't affect your deployed models until you deploy the new model.

    If you have enabled your project to manually split your data when tagging your utterances, you will see two data splitting options:

    • Automatically splitting the testing set from training data: Your tagged utterances will be randomly split between the training and testing sets, according to the percentages you choose. The default percentage split is 80% for training and 20% for testing. To change these values, choose which set you want to change and type in the new value.

    Note

    If you choose the Automatically splitting the testing set from training data option, only the utterances in your training set will be split according to the percentages provided.

    • Use a manual split of training and testing data: Assign each utterance to either the training or testing set during the tagging step of the project.

    Note

    The Use a manual split of training and testing data option is only enabled if you have added utterances to the testing set in the tag data page. Otherwise, it's disabled.

    A screenshot showing the train model page for conversational language understanding projects.

  4. Select the Train button.

Note

  • Only successfully completed training jobs will generate models.
  • Training can take anywhere from a couple of minutes to a couple of hours, based on the size of your tagged data.
  • You can only have one training job running at a time. You can't start another training job within the same project until the running job is completed.
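
Outside Language Studio, the same job can be started through the authoring REST API. The sketch below uses Python's requests library; the endpoint, key, project, and model names are placeholders, and the route, api-version, and body fields are assumptions that you should verify against the current REST API reference before use.

# Hedged sketch: starting a training job via the authoring REST API.
# The route, api-version, and body fields are assumptions; verify them against
# the current REST reference. Values in <angle brackets> are placeholders.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
project_name = "<your-orchestration-project>"

response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/:train",
    params={"api-version": "2022-05-01"},  # assumed; check for the latest version
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
    json={
        "modelLabel": "<model-name>",
        "trainingMode": "standard",
        "evaluationOptions": {              # mirrors the data splitting options above
            "kind": "percentage",           # or "manual" for a manual split
            "trainingSplitPercentage": 80,
            "testingSplitPercentage": 20,
        },
    },
)
response.raise_for_status()
# The job runs asynchronously; its status URL is typically returned in the
# operation-location response header.
print(response.headers.get("operation-location"))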

Get training job status

Select the training job ID from the list. A side pane will appear where you can check the Training progress, Job status, and other details for this job.
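
The same details can be retrieved programmatically. This hedged sketch assumes the job status route follows the authoring REST pattern; verify the path, api-version, and status values against the REST reference.

# Hedged sketch: polling a training job's status via the authoring REST API.
# Path, api-version, and status values are assumptions; placeholders in <angle brackets>.
import time
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
project_name = "<your-orchestration-project>"
job_id = "<training-job-id>"  # returned when the training job was started
headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>"}

while True:
    job = requests.get(
        f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/train/jobs/{job_id}",
        params={"api-version": "2022-05-01"},  # assumed
        headers=headers,
    ).json()
    print(job.get("status"))
    if job.get("status") not in ("notStarted", "running"):
        break  # succeeded, failed, cancelled, or another terminal state
    time.sleep(10)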

Cancel training job

To cancel a training job from within Language Studio, go to the Train model page. Select the training job you want to cancel, and select Cancel from the top menu.
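
Cancellation is also available programmatically. As before, treat the route and api-version in this hedged sketch as assumptions to confirm against the authoring REST API reference.

# Hedged sketch: cancelling a running training job via the authoring REST API.
# Route and api-version are assumptions; placeholders in <angle brackets>.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
project_name = "<your-orchestration-project>"
job_id = "<training-job-id>"

response = requests.post(
    f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/train/jobs/{job_id}/:cancel",
    params={"api-version": "2022-05-01"},  # assumed
    headers={"Ocp-Apim-Subscription-Key": "<your-resource-key>"},
)
response.raise_for_status()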

Next steps