Send prediction requests to a deployment
After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance, based on the model you assigned to the deployment. You can query the deployment programmatically through the prediction API or through the client libraries (Azure SDK).
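As a rough sketch of the programmatic path, the snippet below builds the request body that the prediction API expects. The endpoint, key, project name, and deployment name are placeholders (assumptions, not values from this article); replace them with your own resource details before sending.

```python
import json

# Placeholder values -- assumptions only; substitute your own resource details.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Request body for the analyze-conversations prediction API.
payload = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "1", "text": "Delete this email"},
    },
    "parameters": {
        "projectName": "<project-name>",
        "deploymentName": "<deployment-name>",
    },
}

print(json.dumps(payload, indent=2))

# To actually send the request (requires a real endpoint and key):
# import urllib.request
# req = urllib.request.Request(
#     f"{ENDPOINT}/language/:analyze-conversations?api-version=2023-04-01",
#     data=json.dumps(payload).encode(),
#     headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The same body can be sent through the Azure SDK client libraries instead of raw HTTP; only the transport differs, not the payload.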
Test deployed model
You can use Language Studio to submit an utterance, get predictions and visualize the results.
Test the model
To test your model from Language Studio:
1. Select Test model from the left-side menu.
2. Select the model you want to test. You can only test models that are assigned to deployments.
3. For multilingual projects, select the language of the utterance you're testing from the language dropdown.
4. From the deployment name dropdown, select your deployment name.
5. In the text box, enter an utterance to test. For example, if you created an application for email-related utterances, you could type "Delete this email".
6. From the top menu, select Run the test.
7. After you run the test, you should see the model's response in the result. You can view the results in the entities cards view or in JSON format.
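As an illustration of the JSON view, a prediction response roughly follows this shape (the intent and entity names below are made up for the email example, and the exact fields may vary by API version):

```json
{
  "kind": "ConversationResult",
  "result": {
    "query": "Delete this email",
    "prediction": {
      "projectKind": "Conversation",
      "topIntent": "Delete",
      "intents": [
        { "category": "Delete", "confidenceScore": 0.95 }
      ],
      "entities": [
        { "category": "Email", "text": "email", "offset": 12, "length": 5, "confidenceScore": 0.9 }
      ]
    }
  }
}
```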
Send a conversational language understanding request
1. After the deployment job completes successfully, select the deployment you want to use, and from the top menu select Get prediction URL.
2. In the window that appears, copy the sample request URL and body into your command line.
3. Replace <YOUR_QUERY_HERE> with the actual text you want to send to extract intents and entities from.
4. Submit the POST cURL request in your terminal or command prompt. You'll receive a 202 response with the API results if the request was successful.
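A cURL request along the lines of the steps above might look like the sketch below. The endpoint, key, project, and deployment values are placeholders (assumptions, not values from this article); the script validates the JSON body locally and leaves the actual network call commented out.

```shell
# Request body; the <...> values are placeholders -- replace them with your own.
BODY='{"kind":"Conversation","analysisInput":{"conversationItem":{"id":"1","participantId":"1","text":"Delete this email"}},"parameters":{"projectName":"<project-name>","deploymentName":"<deployment-name>"}}'

# Sanity-check that the body is valid JSON before sending.
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body ok"

# To send the request (requires a real endpoint and key):
# curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/:analyze-conversations?api-version=2023-04-01" \
#   -H "Ocp-Apim-Subscription-Key: <your-key>" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```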