How to deploy ML Designer pipeline as real-time inference pipeline using N-Gram

ID_27051995 76 Reputation points
2020-11-25T12:28:06.153+00:00

Hi,
I deployed a real-time inference pipeline using ML Designer. Training and deployment work fine, but when I consume/test my API it fails: Postman returns error code 500 with "Internal Server Error. Run: Server internal error is from Module Extract N-Gram Features from Text".

This is my training pipeline:
42733-image.png

I read this: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/algorithm-module-reference/extract-n-gram-features-from-text.md#score-or-publish-a-model-that-uses-n-grams

But I don't know how to achieve this.

Thanks in advance.

Azure Machine Learning
An Azure machine learning service for building and deploying models.

Accepted answer
  1. Lu Zhang (MSFT) 86 Reputation points
    2020-11-26T05:06:13.387+00:00

Once you have created the real-time inference pipeline, make the following further modifications:

    1. Find the output Result_vocabulary dataset from Extract N-Gram Features from Text module.
      42884-findmoduleoutputdataset.png
2. Register the dataset with a name
      42905-registerdataset.png
    3. Update real-time inference pipeline like below:
      42865-inferencepipeline.png
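The reason registering the vocabulary dataset matters: at scoring time, Extract N-Gram Features from Text must run in ReadOnly vocabulary mode against the vocabulary built during training, rather than trying to rebuild one from the single incoming request. A minimal pure-Python sketch of that idea (function names are illustrative, not the module's actual implementation):

```python
# Sketch of the "Create" vs "ReadOnly" vocabulary modes behind the
# Extract N-Gram Features from Text module. Hypothetical helper names;
# not the actual Azure ML implementation.

def build_vocabulary(texts, n=2):
    """'Create' mode (training): scan the corpus and assign an index
    to every unigram and n-gram seen."""
    vocab = {}
    for text in texts:
        tokens = text.lower().split()
        grams = tokens + [" ".join(tokens[i:i + n])
                          for i in range(len(tokens) - n + 1)]
        for g in grams:
            vocab.setdefault(g, len(vocab))
    return vocab

def featurize(text, vocab, n=2):
    """'ReadOnly' mode (scoring): map new text onto the *training*
    vocabulary. Unseen n-grams are dropped, never added, so the
    feature vector length always matches what the model expects."""
    counts = [0] * len(vocab)
    tokens = text.lower().split()
    grams = tokens + [" ".join(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1)]
    for g in grams:
        if g in vocab:
            counts[vocab[g]] += 1
    return counts

train_texts = ["good product", "bad service"]
vocab = build_vocabulary(train_texts)          # built once, at training
features = featurize("good service today", vocab)  # reused at scoring
```

Without the registered vocabulary dataset wired into the inference pipeline, the scoring module has no training vocabulary to read from, which is what surfaces as the 500 error from the deployed endpoint.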

    We will improve the documentation accordingly. Thanks for reporting the issue!


1 additional answer

  1. Vivek Sinha 1 Reputation point
    2021-04-28T06:35:33.063+00:00

Hi @Lu Zhang (MSFT), I do not see any output datasets available to select for registration. How should I proceed? I have also attached a screenshot.

    92011-image.png

Input datasets: None
Output datasets: None

