Test your model
Once your model is successfully trained, you can use translations to evaluate its quality. To make an informed decision about whether to use the standard model or your custom model, evaluate the delta between your custom model's BLEU score and the standard model's baseline BLEU score. If your model is trained within a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
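The deploy-or-not decision above can be sketched as a small comparison. The minimum-gain threshold below is illustrative only, not an official recommendation:

```python
def should_deploy_custom_model(custom_bleu: float,
                               baseline_bleu: float,
                               min_gain: float = 3.0) -> bool:
    """Return True if the custom model beats the standard baseline
    by at least min_gain BLEU points (threshold is an assumption)."""
    return (custom_bleu - baseline_bleu) >= min_gain

# Example: a custom model scoring 45 vs. a baseline of 40
print(should_deploy_custom_model(45.0, 40.0))
```

In practice you would also weigh human evaluation of sample translations, since a small BLEU delta alone is not always decisive.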
BLEU score
BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that is machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100—a BLEU score between 40 and 60 indicates a high-quality translation.
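To make the metric concrete, here is a minimal, self-contained sketch of sentence-level BLEU: modified n-gram precision combined with a brevity penalty, using simple add-one smoothing. Custom Translator's actual scoring is corpus-level and more involved, so treat this as an illustration of the idea only:

```python
import math
from collections import Counter

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU on a 0-100 scale:
    geometric mean of modified n-gram precisions (n = 1..max_n)
    times a brevity penalty, single reference, add-one smoothing."""
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # add-one smoothing so one empty n-gram order doesn't zero the score
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * geo_mean

# Identical sentences score 100; unrelated sentences score low
print(bleu("the cat sat on the mat", "the cat sat on the mat"))
```

Note how the score rewards exact n-gram matches against the reference, which is why consistency between training and test data in a narrow domain tends to produce high scores.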
Model details
1. Select the Model details blade.
2. Select the model name. Review the training date/time, total training time, and the number of sentences used for training, tuning, testing, and the dictionary. Check whether the system generated the test and tuning sets.
3. Use the Category ID to make translation requests.
4. Evaluate the model BLEU score. Review the test set: the BLEU score is the custom model's score, and the Baseline BLEU is the score of the pretrained baseline model used for customization. A higher BLEU score means higher translation quality using the custom model.
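As an illustration of using the Category ID, a request to the Translator Text API v3 routes to your custom model via the category query parameter. The sketch below only builds the request URL; the category value is a made-up placeholder, and a real call would also need your endpoint region, subscription key headers, and a JSON body with the text to translate:

```python
from urllib.parse import urlencode

# Public Translator Text API v3 endpoint
TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_url(category_id: str, to_lang: str) -> str:
    """Build a translate request URL that targets a deployed custom model.

    Passing your project's Category ID as `category` routes the request
    to the custom model; omitting it uses the standard model.
    """
    params = {"api-version": "3.0", "to": to_lang, "category": category_id}
    return f"{TRANSLATOR_ENDPOINT}?{urlencode(params)}"

# Hypothetical Category ID, for illustration only
url = build_translate_url("1234abcd-GENERAL", "de")
print(url)
```

Requests without the category parameter are served by the standard model, which is how you can compare custom and baseline output for the same source text.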
Test quality of your model's translation
1. Select the Test model blade.
2. Select the model name.
3. Manually evaluate the translations from your custom model and the baseline model (the pretrained baseline used for customization) against the Reference (the target translation from the test set).
4. If the training results are satisfactory, place a deployment request for the trained model.